The Perfect Data Strategy for Improved Business Analytics

Advancements in AI and machine learning have driven the growing importance of data analytics and, therefore, of data itself. Unless you have established the prerequisite steps of data collection, data storage, and data preparation, it is impossible to move on to the data science process.

At Allwyn, we believe that the journey towards improved operations and decision-making starts with establishing a good data strategy, along with the tools and processes required to easily analyze your enterprise data. This means starting with data discovery and data collection, organizing the data in a data warehouse or a data lake, and finally using machine learning to perform deep data analytics that enhance productivity, launch new business models, or establish a strong competitive advantage. We have an established data life cycle process that starts with data discovery and ends with reaching business outcomes through data analysis, machine learning, and AI. We employ a two-phased approach covering data transformation and operational transformation, as shown below.

In the first phase, data transformation, our goal is to design, build, and maintain an enterprise data warehouse or data lake. This helps make the most of an organization’s valuable data assets, break down data silos, and create a data maturity model that accelerates the delivery of accurate, near-real-time data for the next phase. During this phase, we also establish data governance focused on the privacy and security of the data.

The second phase focuses on data analytics: predictive, prescriptive, or diagnostic analytics that can provide the various departments of your business with actionable insights. In this phase, we also help with rapid prototyping and experimentation with advanced analytics such as machine learning and AI. We help you adopt machine learning into your data analytics to support product innovation and give you a competitive edge in the marketplace.

Our data management strategy provides an enterprise with quick and complete access to the data and the analytics it needs through four steps.

Our four-step solution for Enterprise Data Management is elaborated below.

  1. Collect: Ingestion/Data Prep/Data Quality/Transformation

In this step, we access and analyze both real-time and stationary data to reliably determine data quality and to extract, transform, and blend data from multiple sources. We then map and prepare the data for loading into a target data lake. It is important to identify all of your data sources and data streams so you can determine your data acquisition approach and establish the frequency of your batch processes. This also involves establishing infrastructure that can handle high-volume data streams and support a distributed environment.
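As a minimal sketch of this step, the snippet below extracts two hypothetical source feeds with pandas, blends them on a shared key, and lands the prepared result in a data lake folder as Parquet. The file paths, column names, and keys are illustrative assumptions, not a prescription for any particular stack.

```python
import pandas as pd

# Extract: read two hypothetical source extracts (paths and columns are placeholders)
crm = pd.read_csv("exports/crm_customers.csv", parse_dates=["created_at"])
billing = pd.read_json("feeds/billing_events.json", lines=True)

# Basic data-quality checks before blending
assert crm["customer_id"].notna().all(), "CRM extract has null customer IDs"
billing = billing.drop_duplicates(subset=["invoice_id"])

# Transform: normalize keys and blend the two sources on customer_id
crm["customer_id"] = crm["customer_id"].astype(str).str.strip()
billing["customer_id"] = billing["customer_id"].astype(str).str.strip()
blended = crm.merge(billing, on="customer_id", how="left")

# Load: land the prepared data in the target data lake as Parquet
blended.to_parquet("datalake/raw/customer_billing.parquet", index=False)
```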

Because multiple systems exist in silos, members of an organization often do not operate off of the same data, which makes data-driven decisions difficult. Businesses are therefore moving towards a single source of truth model to overcome this challenge.

With a single source of truth (SSOT), data is aggregated from many systems within an organization into a single location. This eliminates duplication and, hence, enhances data quality. An SSOT is not a system, tool, or strategy, but rather a state of being for a company’s data: all of it can be found via a single reference point.

  2. Store

We use a scalable, reliable, cloud-based data lake comprising various data repositories for both structured and unstructured formats to ensure reliable data storage. In this step, you cleanse, categorize, and store the data according to your business functions. For example, you can establish separate functional areas for sales, marketing, finance, and procurement-related data. This helps you establish functional units while identifying the need for data integrators across functions.
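One lightweight way to reflect this functional separation in the lake itself is to partition the stored files by business function. The sketch below, with an assumed department column and a local path standing in for a cloud bucket, writes one folder per functional area using pandas’ partitioned Parquet output.

```python
import pandas as pd

# Hypothetical cleansed dataset tagged with the owning business function
records = pd.DataFrame(
    {
        "department": ["sales", "finance", "marketing", "sales"],
        "account_id": [101, 102, 103, 104],
        "order_date": pd.to_datetime(["2021-01-15", "2021-01-20", "2021-02-03", "2021-02-10"]),
        "amount": [2500.0, 1200.0, 800.0, 4300.0],
    }
)

# Writing with partition_cols creates one folder per functional area,
# e.g. datalake/curated/department=sales/...
records.to_parquet("datalake/curated", partition_cols=["department"], index=False)
```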

  3. Process/Analyze

Once the data is identified, organized, and stored, it is ready for data analysis, machine learning model building, or statistical analysis. Data analysts or data scientists can run queries or develop algorithms to analyze trends, discover business intelligence, and present outcomes that support smart decisions.
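For instance, an analyst might pull a curated functional area straight from the lake and compute a simple trend. The sketch below is purely illustrative; the path and column names are assumptions carried over from the storage example above.

```python
import pandas as pd

# Read a curated functional area from the data lake (hypothetical path)
sales = pd.read_parquet("datalake/curated/department=sales")

# A simple trend analysis: monthly revenue and month-over-month growth
sales["month"] = pd.to_datetime(sales["order_date"]).dt.to_period("M")
monthly = sales.groupby("month")["amount"].sum().sort_index()
growth = monthly.pct_change()

print(monthly.tail(), growth.tail(), sep="\n")
```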

  4. Visualize

The output of the data analysis needs to be presented in a visual dashboard to provide meaningful answers to the key questions driving business decisions. Here, we not only provide insightful visual dashboards but also search-driven, “Google-like” products with natural language processing capabilities that deliver easy-to-understand answers for all levels of data users as well as the public. With products like ThoughtSpot, users can type a simple Google-like search in natural language to instantly analyze billions of rows of data. Users can converse with data using casual, everyday language and get precise answers to their questions instantly.
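Products such as ThoughtSpot handle the search-driven experience end to end; the charts behind a conventional dashboard, however, can be prototyped in a few lines. The sketch below uses a made-up monthly revenue series to show the kind of visual that would feed such a dashboard.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative monthly revenue series (in practice, the output of the analysis step)
monthly = pd.Series(
    [120_000, 135_000, 128_000, 150_000],
    index=pd.period_range("2021-01", periods=4, freq="M"),
    name="revenue",
)

fig, ax = plt.subplots(figsize=(8, 4))
monthly.plot(kind="bar", ax=ax)
ax.set_title("Monthly Sales Revenue")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue (USD)")
fig.tight_layout()
fig.savefig("monthly_revenue.png")
```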

Summary

Getting your data strategy in place is the first step in your data analytics, data science, and AI journey. As the marketplace continues to rattle business models, adopting newer data analytics tools such as machine learning can help you not only stay ahead of the competition but also continue to operate your business successfully in uncertain times. This creates a data-driven value cycle that can pave the way for the transformational change essential to becoming an AI-enabled organization.

Watch this space or follow us on LinkedIn to stay tuned to the latest digital trends and technology advancements.


Eliminating Major Barriers for Data Insights

The lifecycle of data, data analytics, and data science starts with collecting data from relevant sources, performing ETL (Extraction, Transformation, and Loading), and cleaning the data into a machine-readable format. Once the data is ready, statistical analysis or machine learning algorithms can identify patterns, predict outcomes, or even perform functions using Natural Language Processing (NLP). Since data is at the core of data analytics, it is imperative to understand the challenges that can arise during its implementation. Here we present the top four data challenges:

Complexity: Data spread across various sources

Merging data from multiple sources is a major challenge for most enterprise organizations. According to McAfee, an enterprise with around 500 employees can deploy more than 20 applications, while larger enterprises with more than 50,000 employees run more than 700. Unifying the data from these applications is a complicated task that can lead to duplication, inconsistency, discrepancies, and errors. With the help of data integration and profiling, the accuracy, completeness, and validity of the data can be determined.

Quality: Quality of incoming Data

One of the common data quality issues in the merging process is duplicate records. Multiple copies of the same record can lead to inaccurate insights as well as computation and storage overuse.

What if the collected data is missing, inconsistent, or out of date? Data verification and matching methods need to be implemented at each collection point to prevent flawed insights and biased results.
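A minimal sketch of such verification checks, assuming the data arrives as a pandas DataFrame with a hypothetical record_id key, might look like this:

```python
import pandas as pd

def verify_batch(df: pd.DataFrame, key: str = "record_id") -> pd.DataFrame:
    """Basic data-quality checks applied at a collection point (illustrative only)."""
    # Flag and drop duplicate records on the business key
    duplicates = df[df.duplicated(subset=[key], keep=False)]
    if not duplicates.empty:
        print(f"Dropping {len(duplicates)} duplicate rows on {key}")
    df = df.drop_duplicates(subset=[key], keep="first")

    # Report the share of missing values per column so upstream sources can be fixed
    missing = df.isna().mean().sort_values(ascending=False)
    print("Share of missing values per column:\n", missing[missing > 0])

    return df
```

In practice, checks like these would be tied to alerting so that flawed batches are caught before they reach the analytics layer.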

Volume: Volume of data available

To find relationships and correlations, a successful machine learning algorithm depends on large volumes of data. Data collected from multiple sources and multiple time frames is essential in creating machine learning models during training, validation, and deployment phases. More data does not necessarily mean gathering more records but can mean adding more features to the existing data from different sources that can improve the algorithm.

Algorithm: Conscious effort to remove confirmation bias from the approach

A major advantage of AI over human decision-making is the ability to garner insights into an algorithm’s decision-making process (using explainable AI). Furthermore, algorithms can be analyzed for biases, and their outcomes can be checked for unfair treatment of protected classes. Although AI, at the outset, can be viewed as perpetuating human biases, it offers better insight into the data and the decision-making process.

Over the last decade, Allwyn has overcome these common data challenges with the proven experience of its seasoned data professionals. We will share our own data management strategy in next week’s post. Watch this space or follow us on LinkedIn to stay tuned.


Machine Learning to Improve Outcomes by Analyzing Lung Cancer Data

Finding a suitable dataset for machine learning to predict readmission was the first challenge we had to overcome, since datasets presently available in the healthcare world tend to be either dirty and unstructured or clean but lacking information.

Most patient-level data are not publicly available for research due to privacy reasons.

With these limitations in mind, after researching multiple data sources, including SEER-Medicare, HCUP, and public repositories, we chose the Nationwide Readmissions Database (NRD) from the Healthcare Cost and Utilization Project (HCUP). The Agency for Healthcare Research and Quality (AHRQ) creates the HCUP databases through a Federal-State-Industry partnership, and the NRD is a unique database designed to support various types of analyses of national readmission rates for all patients, regardless of the expected payer for the hospital stay.

Our research involved using machine learning and statistical methods to analyze NRD. Data understanding, preparation, and engineering were the most time-consuming and complex phases of this data science project, which took nearly seventy percent of the overall time.

Using big data processing and extraction technologies such as Spark and Python, 40 million patient records were filtered down to those of patients who had undergone at least one lobectomy procedure. The filtered data was then put through rigorous data quality checks and cleaned, with missing values imputed. More than 100 input variables were explored to analyze their correlations with the outcome, understand our target group’s demographics, and identify redundant features.
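The filtering itself can be expressed compactly in PySpark. The snippet below is a simplified, hypothetical version of that step: the file path, column names, and procedure codes are placeholders for illustration, not the actual NRD schema or the exact codes we used.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("nrd-lobectomy-filter").getOrCreate()

# Placeholder path and schema: the real NRD Core file uses different column names
core = spark.read.csv("s3://example-bucket/nrd_core.csv", header=True, inferSchema=True)

# Hypothetical lobectomy procedure codes, used purely for illustration
LOBECTOMY_CODES = ["0BTC0ZZ", "0BTD0ZZ"]

# Keep only patients with at least one lobectomy procedure on record
lobectomy_patients = (
    core.filter(F.col("procedure_code").isin(LOBECTOMY_CODES))
        .select("patient_id")
        .distinct()
)
filtered = core.join(lobectomy_patients, on="patient_id", how="inner")
filtered.write.mode("overwrite").parquet("s3://example-bucket/nrd_lobectomy/")
```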

Many of these input variables were categorical and required additional research and feature engineering.

The NRD dataset consists of three main files: Core, Hospital, and Severity.

The Core file includes patient-level medical and non-medical factors. Non-medical factors cover socioeconomic attributes such as age, gender, payment category, and the urban/rural location of the patient. Medical factors include detailed information about every diagnosis code, procedure code, the respective diagnosis-related groups (DRG), the timing of those procedures, the yearly quarter of admission, and more.

Allwyn’s data engineering practices included analyzing every single feature, researching and creating data dictionaries, and performing feature transformations to see which features contribute to our prediction algorithms. With an average age of 65 for lobectomy patients, the data showed that women had more lobectomies than men, while more men were readmitted than women.

The Severity file provided the summarized severity level of the diagnosis codes. The Hospital file provided hospital-level information such as bed size, control/ownership of the hospital, urban/rural designation, and the teaching status of urban hospitals.

We consulted subject matter experts in the lung cancer field and, on their advice, added features such as the Elixhauser and Charlson comorbidity indices to enrich our existing dataset. By delving deep into the clinical features, we also ensured that the chosen variables were pre-procedure information and verified that there was no information leakage from post-operative or otherwise future-known variables.

The features were then analyzed, using correlation matrices and feature importance charts, to check whether they were statistically significant for our selected predictive models.

Analyzing the initial data distribution for many of the features required us to remove outliers, transform skewed distributions, and scale the majority of the features for algorithms that were particularly sensitive to non-normalized variables. Diagnosis codes were grouped into 22 categories to reduce dimensionality and improve interpretation.
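A condensed, hypothetical version of that preprocessing, using NumPy and scikit-learn on placeholder column names, could look like the following:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical prepared feature table; column names are placeholders
df = pd.read_parquet("nrd_features.parquet")

# Remove extreme outliers on a skewed numeric feature (e.g., total charges)
upper = df["total_charges"].quantile(0.99)
df = df[df["total_charges"] <= upper]

# Log-transform the skewed distribution, then scale the numeric features
df["log_total_charges"] = np.log1p(df["total_charges"])
numeric_cols = ["log_total_charges", "length_of_stay", "age"]
df[numeric_cols] = StandardScaler().fit_transform(df[numeric_cols])
```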

The resulting dataset was highly imbalanced in terms of the readmitted and not readmitted classes, 8% and 92%, respectively. Most classification models are extremely sensitive to imbalanced datasets, and multiple data balancing techniques such as oversampling the minority class, under-sampling the majority class, and Synthetic Minority Oversampling Technique (SMOTE) were used to train our algorithms and compare the outcomes.
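As one example of these balancing techniques, SMOTE is available in the imbalanced-learn library. The sketch below shows the general pattern on a synthetic stand-in dataset with the same 8%/92% class mix; oversampling is applied only to the training split so the test set keeps the true imbalance.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the prepared NRD feature matrix: 8% positive (readmitted) class
X, y = make_classification(
    n_samples=20_000, n_features=30, weights=[0.92, 0.08], random_state=42
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Oversample only the training split so evaluation reflects the real class mix
X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)
```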

Initial machine learning models had low precision and low recall scores. Although this could be due to many different reasons, the Allwyn team focused mainly on additional feature engineering to reduce the high dimensionality of the initial input variables while also comparing different data balancing methods. This was a time-consuming, iterative process and required training more than a thousand different models on different combinations or groupings of diagnosis codes (shown in Table 2) along with other non-medical factors.

K-fold cross-validation was also used during training and validation to ensure that the training results were representative of test performance. To further improve classification of readmitted patients, we also weighted the admission and readmission classes, training models with different weightings and comparing their validation scores.
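A simplified sketch of that validation setup, combining stratified K-fold cross-validation with class weighting in scikit-learn, is shown below; the random forest here is illustrative, not the model we ultimately selected.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in data with the same 8%/92% imbalance as the readmission labels
X, y = make_classification(
    n_samples=20_000, n_features=30, weights=[0.92, 0.08], random_state=42
)

# class_weight="balanced" upweights the rare readmitted class during training
model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
recall_scores = cross_val_score(model, X, y, cv=cv, scoring="recall")
print("Recall per fold:", recall_scores.round(3))
```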

We also collaborated with George Mason University through their DAEN Capstone program. A team led by Dr. James Baldo, with several participants from the graduate program, analyzed the underlying data and developed predictive models using various technologies, including AWS SageMaker Autopilot. The resulting models and their respective hyperparameters were further analyzed and tuned to achieve high recall.

After choosing the best model, we designed and implemented this workflow in Alteryx Designer to automate our process and placed it in a feedback and re-evaluation cycle, following the Cross-Industry Standard Process for Data Mining (CRISP-DM), so that our model can continue to evolve and be deployed in production.

To learn more about how we decided on the best model and the associated classification methods, follow us on LinkedIn.

 


Predicting hospital readmissions and underlying risk factors of Lung Cancer with Machine Learning

Readmission after pulmonary lobectomy is a frequent challenge for hospitals, healthcare plans, and insurance providers. Readmission occurs when a patient is admitted to a hospital for any reason within 30 days of discharge. Recurring problems and readmissions have been a major issue in the healthcare system. Readmissions are often costly; however, the insights derived from studying them can be incredibly beneficial for both the public and the healthcare industry. With this in mind, the Centers for Medicare & Medicaid Services (CMS) put the Hospital Readmissions Reduction Program (HRRP) into motion to improve Americans’ healthcare. This program penalizes hospitals with excessive readmissions.

Allwyn is developing a machine learning-based approach to reduce readmissions by recommending data-driven preventive actions prior to a lobectomy procedure. This approach can be used by various organizations, such as hospitals or healthcare companies, to take proactive measures and circumvent readmissions by predicting the following (a brief illustrative sketch follows the list):

  • The probability of a patient’s readmission
  • Underlying risk factors
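In scikit-learn terms, both outputs can come from a single trained classifier: per-patient readmission probabilities via predict_proba, and a first view of risk factors via feature importances. The sketch below uses synthetic stand-in data and an illustrative model rather than our production pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for pre-procedure patient features and 0/1 readmission labels
X, y = make_classification(n_samples=5_000, n_features=20, weights=[0.92, 0.08], random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# 1) Probability of readmission for each (new) patient
readmission_prob = model.predict_proba(X[:5])[:, 1]

# 2) A first view of underlying risk factors via feature importances
top_factors = sorted(zip(feature_names, model.feature_importances_), key=lambda t: -t[1])[:5]
print(readmission_prob.round(3), top_factors, sep="\n")
```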

We will be sharing the challenges with Data Exploration and Engineering, followed by our Strategy and its impact. Follow us on LinkedIn as we share our approach in the coming weeks.
