Predicting Length of Stay of Patients with Lung Cancer and Mental Illness

To predict the length of stay (LOS) of lung cancer patients who have undergone lobectomy and also have a mental illness, the team developed multiple machine learning models. For this study, we split the data into 80% training data (4,464 samples) and 20% test data (1,117 samples).
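As a minimal sketch of this split (assuming a pandas DataFrame with a length-of-stay column; the file and column names below are illustrative placeholders, not the study's actual schema):

```python
# Minimal sketch of the 80/20 split described above, using scikit-learn.
# The input file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("lobectomy_patients.csv")       # hypothetical input file
X = df.drop(columns=["length_of_stay"])          # predictors (diagnosis codes, etc.)
y = df["length_of_stay"]                         # target: LOS in days

# 80% train / 20% test, matching the 4,464 / 1,117 split reported above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(len(X_train), len(X_test))
```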

We divided this problem statement into two areas of evaluation:

  • Predicting the LOS of a patient with both lung cancer and mental illness using only diagnosis codes.
  • Predicting the LOS of a patient with both lung cancer and mental illness using both diagnosis codes and socio-demographic features.

The following algorithms were then developed:

  • SGDRegressor
  • GradientBoostingRegressor
  • LinearRegression
  • KNeighborsRegressor
  • RandomForestRegressor
  • SVR
  • TensorFlow


Determining Correlation between Mental Illness and Lung Cancer using Machine Learning

DATA DISTRIBUTION

Before any machine learning algorithm can be designed, it is important to understand the variability and skewness of the data, as well as the assumptions we can make when building machine learning models. Below are some key statistical distributions of the dataset we used for our study.
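For instance, the kind of distribution summary referred to here can be computed with pandas and SciPy (the file and column names are placeholders, assuming a numeric length-of-stay column):

```python
# Hedged sketch of a basic distribution summary with pandas and SciPy.
import pandas as pd
from scipy import stats

df = pd.read_csv("lobectomy_patients.csv")        # hypothetical input file
los = df["length_of_stay"]                        # placeholder column name

print(los.describe())                             # count, mean, std, quartiles
print("skewness:", stats.skew(los.dropna()))      # asymmetry of the LOS distribution
print("kurtosis:", stats.kurtosis(los.dropna()))  # heaviness of the tails
```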


Exploratory Analysis of Mental Illness Data amongst Lung Cancer Patients

During the course of our study, we focused specifically on lung cancer patients who had undergone lobectomy (lung cancer surgery) and analyzed whether any specific mental illness/psychiatric diagnoses, or groups of diagnoses, increase perioperative death risk.
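One possible way to test such an association, shown purely as a hypothetical sketch and not necessarily the method used in the study, is a logistic regression of perioperative death on binary diagnosis-group indicators (all column names below are placeholders):

```python
# Hypothetical logistic-regression sketch for diagnosis-group risk association.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("lobectomy_patients.csv")    # hypothetical input file

# Binary indicators for illustrative diagnosis groups.
X = sm.add_constant(df[["mood_disorder", "schizophrenia", "substance_abuse"]])
y = df["perioperative_death"]                 # 1 = died perioperatively, 0 = survived

result = sm.Logit(y, X).fit()
print(result.summary())                       # odds ratios follow from exp(coefficients)
```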


Analyzing Severe Mental Illness in Lung Cancer Patients

Lung cancer is the leading cause of cancer-related deaths worldwide. Patients with severe mental illness (SMI) are overrepresented in the lung cancer population. SMI refers to psychological conditions, including mood disorders, major depression, schizophrenia, bipolar disorder, and substance abuse disorders, that inhibit a person's ability to engage in functional and occupational activities.

Cancer patients diagnosed with SMI may not adhere to treatment plans and may have reduced access to healthcare. Individuals with SMI may also present with more advanced tumor growth at diagnosis, partly because of that limited access to healthcare systems. The combination of inadequate healthcare and an increased risk of somatic disorders in patients with SMI can explain their higher mortality rates. Many research papers have indicated that cancer accounts for a significant proportion of the excess mortality seen in people with mental illness. Mental illness is typically associated with suicide, but much of the excess mortality associated with it is due to cardiovascular or respiratory diseases and cancer.


The Perfect Data Strategy for Improved Business Analytics

Advancements in AI and machine learning have driven the growing importance of data analytics and, therefore, of data itself. Unless you have established the prerequisite steps of data collection, data storage, and data preparation, it is impossible to move on to the data science process.

At Allwyn, we believe that the journey towards improved operations and decision-making starts with a good data strategy, along with the tools and processes required to easily analyze your enterprise data. This involves starting with data discovery and data collection, organizing the data in a data warehouse or a data lake, and finally using machine learning to perform deep data analytics that enhance productivity, launch new business models, or build a strong competitive advantage. We have an established data life cycle process that starts with data discovery and ends with reaching business outcomes through data analysis, machine learning, and AI. We employ a two-phased approach to data transformation and operational transformation.

In the first phase, data transformation, our goal is to design, build, and maintain an enterprise data warehouse or data lake. This helps make the most of an organization's valuable data assets, break down data silos, and create a data maturity model that accelerates the delivery of accurate, near-real-time data for the next phase. During this phase, we also establish data governance focused on the privacy and security of the data.

The second phase focuses on data analytics: predictive, prescriptive, or diagnostic analytics that can give the various departments of your business actionable insights. In this phase, we also help with rapid prototyping and experimentation with advanced analytics such as machine learning and AI. We help you adopt machine learning in your data analytics to support product innovation and give you a competitive edge in the marketplace.

Our data management strategy provides an enterprise with quick and complete access to the data and analytics it needs through four steps, elaborated below.

  1. Collect: Ingestion/Data Prep/Data Quality/Transformation

In this step, we access and analyze both real-time and at-rest data to reliably determine data quality and to extract, transform, and blend data from multiple sources. We then map and prepare the data for loading into a target data lake. It is important to identify all your data sources and data streams to determine your data acquisition approach and establish the frequency of your batch processes. This also involves establishing the infrastructure needed to handle high-volume data streams and support a distributed environment.

Because multiple systems exist in silos, members of the organization are often not operating off of the same data, which makes data-driven decisions difficult. To overcome this challenge, businesses are moving towards a single-source-of-truth model.

With a single source of truth (SSOT), data is aggregated from many systems within an organization into a single location. This eliminates duplication and hence enhances data quality. An SSOT is not a system, tool, or strategy, but rather a state of being for a company's data, in that all of it can be found via a single reference point.
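As a minimal illustration of this Collect step (the file paths, tables, and columns below are assumptions for the sketch, not a specific customer pipeline):

```python
# Illustrative extract-transform-load sketch for the Collect step.
import pandas as pd

# Extract: pull from two siloed sources (hypothetical CSV exports).
sales = pd.read_csv("exports/sales.csv")
crm = pd.read_csv("exports/crm_contacts.csv")

# Transform: standardize keys, drop obviously bad rows, blend the sources.
sales["customer_id"] = sales["customer_id"].astype(str).str.strip()
crm["customer_id"] = crm["customer_id"].astype(str).str.strip()
blended = sales.merge(crm, on="customer_id", how="left")
blended = blended.dropna(subset=["order_date"])

# Load: write to the target data lake in a columnar format.
blended.to_parquet("data_lake/sales/orders.parquet", index=False)
```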

  2. Store

We use a scalable, reliable, cloud-based data lake composed of various data repositories for both structured and unstructured formats to ensure dependable data storage. In this step, you cleanse, categorize, and store the data according to your business functions. For example, you can establish separate functional areas for sales, marketing, finance, and procurement-related data. This helps you establish functional units while identifying the need for data integrators across functions.
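A simple way to picture this functional organization (the staging path, the business_function column, and the output layout are illustrative assumptions):

```python
# Illustrative routing of stored records into per-function areas of the lake.
import pandas as pd

records = pd.read_parquet("data_lake/staging/records.parquet")  # hypothetical staging area

# Route each record to the functional area that owns it (e.g. sales, finance).
for function, subset in records.groupby("business_function"):
    subset.to_parquet(f"data_lake/{function}/records.parquet", index=False)
```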

  3. Process/Analyze

Once the data is identified, organized, and stored, it is ready for data analysis, machine learning model building, or statistical analysis. Data analysts or data scientists can run multiple queries or develop algorithms to analyze trends, discover business intelligence, and present outcomes that support smart decisions.
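For example, a typical analyst query at this stage might look like the following (the table and column names are placeholders):

```python
# Example analyst query: monthly revenue trend from the hypothetical sales area.
import pandas as pd

orders = pd.read_parquet("data_lake/sales/orders.parquet")

orders["order_month"] = pd.to_datetime(orders["order_date"]).dt.to_period("M")
monthly_revenue = orders.groupby("order_month")["order_total"].sum()
print(monthly_revenue.tail(12))
```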

  4. Visualize

The output of the data analysis needs to be presented in a visual dashboard that provides meaningful answers to the key questions driving business decisions. Here, we not only provide insightful visual dashboards but also offer search-driven, “Google-like” products with natural language processing capabilities that deliver answers in easy-to-understand presentations for all levels of data users and the public. With products like ThoughtSpot, users can type a simple Google-like search in natural language to instantly analyze billions of rows of data. Users can also converse with data using casual, everyday language and get precise answers to their questions instantly.

Summary

Getting your data strategy in place is the first step in starting your data analytics, data science, and AI journey. As the marketplace continues to rattle business models, adopting newer data analytics tools such as machine learning can help you not only stay ahead of the competition but also continue to operate your business successfully in uncertain times. This can create a data-driven value cycle that helps pave the way for the transformational change essential to becoming an AI-enabled organization.

Watch this space or follow us on LinkedIn to stay tuned to the latest digital trends and technology advancements.


Eliminating Major Barriers for Data Insights

The lifecycle of data, data analytics, and data science starts with collecting data from relevant sources, performing ETL (Extraction, Transformation, and Loading) functions, cleaning the data, and making it available in a machine-readable format. Once the data is ready, statistical analysis or machine learning algorithms can identify patterns, predict outcomes, or even perform functions using Natural Language Processing (NLP). Since data is at the core of data analytics, it is imperative to understand the challenges we might face during implementation. Here we present the top four data challenges:

Complexity: Data spread across various sources

Merging data from multiple sources is a major challenge for most enterprise organizations. According to McAfee, an enterprise with an average of 500 employees can deploy more than 20 applications. Larger enterprises with more than 50,000 employees run more than 700 applications. Unifying the data from these applications is a complicated task that can lead to duplication, inconsistency, discrepancies, and errors. With the help of data integration and profiling, the accuracy, completeness, and validity of the data can be determined.
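A basic profiling pass of this kind might look like the sketch below (the integrated table and its columns are illustrative assumptions):

```python
# Simple profiling pass: completeness, duplicate keys, and value validity.
import pandas as pd

df = pd.read_parquet("data_lake/integrated/customers.parquet")  # hypothetical table

print(df.isna().mean().sort_values(ascending=False))    # completeness: share of missing values
print(df.duplicated(subset=["customer_id"]).sum())       # accuracy: duplicate keys after merging
print((~df["email"].str.contains("@", na=False)).sum())  # validity: malformed email addresses
```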

Quality: Quality of incoming Data

One of the common data quality issues in the merging process is duplicate records. Multiple copies of the same record can lead to inaccurate insights as well as computation and storage overuse.

What if the collected data is missing, inconsistent, or out of date? Data verification and matching methods need to be implemented at each collection point to prevent flawed insights and biased results.
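As a sketch of such deduplication and verification at a collection point (the batch file, key column, and required fields are placeholders):

```python
# Deduplication and basic verification checks applied to an incoming batch.
import pandas as pd

incoming = pd.read_csv("incoming_batch.csv")              # hypothetical new batch

# Remove exact and key-based duplicates before they reach downstream models.
incoming = incoming.drop_duplicates()
incoming = incoming.drop_duplicates(subset=["record_id"], keep="last")

# Flag records that fail basic verification instead of silently imputing them.
required = ["record_id", "updated_at", "amount"]
invalid = incoming[incoming[required].isna().any(axis=1)]
valid = incoming.drop(invalid.index)
invalid.to_csv("quarantine/failed_verification.csv", index=False)
```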

Volume: Volume of data available

To find relationships and correlations, a successful machine learning algorithm depends on large volumes of data. Data collected from multiple sources and multiple time frames is essential when creating machine learning models across the training, validation, and deployment phases. More data does not necessarily mean gathering more records; it can also mean adding more features to the existing data from different sources, which can improve the algorithm.

Algorithm: Conscious effort to remove confirmation bias from the approach

A major advantage of AI over human decision-making is that we can garner insights into an algorithm's decision-making process (using explainable AI). Furthermore, algorithms can be analyzed for biases, and their outcomes verified for unfair treatment of protected classes. Although AI, at the outset, can be viewed as perpetuating human biases, it offers better insight into the data and the decision-making process.
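As an illustration of both ideas (explainability and a simple outcome check across a protected attribute), the sketch below uses scikit-learn's permutation importance; the dataset, features, and protected_group column are hypothetical placeholders, and a real bias audit requires far more care:

```python
# Hypothetical sketch: permutation importance plus an outcome-rate check.
# All features are assumed numeric; protected_group is a 0/1 indicator column.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("loan_applications.csv")                 # hypothetical dataset
X = df.drop(columns=["approved"])
y = df["approved"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Which features drive the model's decisions?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False))

# Do predicted approval rates differ across the protected attribute?
pred = pd.Series(model.predict(X_test), index=X_test.index)
print(pred.groupby(X_test["protected_group"]).mean())
```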

Over the last decade, Allwyn has overcome these common data challenges with the proven experience of its seasoned data professionals. We will share our own data management strategy in next week's post. Watch this space or follow us on LinkedIn to stay tuned.
