LynxCare attended the AI & Big Data Expo North America, the leading Artificial Intelligence & Big Data Conference & Exhibition, which took place on November 28-29 at the Santa Clara Convention Center, Silicon Valley.
It showcased next-generation technologies and strategies from the world of Artificial Intelligence & Big Data, and provided an opportunity to explore practical, successful implementations of AI & Big Data that are driving business forward in 2018 and beyond.
We are eager to share our experiences from the exciting talks and panels we attended, but first we would like to highlight a healthcare/AI startup we came across at the expo: mfine.
mfine is a hospital & health care startup founded in India in 2017 by co-founders Ajit Narayanan, Ashutosh Lawania and Prasad Kompalli. mfine is an AI-powered healthcare platform that connects you with top doctors from some of the best hospitals online in under 60 seconds. The app positions itself as a primary care physician in everyone’s hand.
In addition, Ajit shared how the app provides comprehensive clinical summaries to help doctors understand the patient’s lifestyle and current illness. Moreover, with the help of AI the app offers provisional diagnoses (applicable to more than 1,000 diseases), powering specialists with provisional treatment plans. With mfine, doctors can focus less on documentation and more on the patient, as communication becomes key with standardized protocols for triage, follow-up and long-term care.
10 things you should know about AI
Another interesting discussion we joined was led by Emrah Gultekin, co-founder and CEO of Chooch, the new standard in AI training, tagging, and predictions for archived and live digital assets.
Emrah presented the 10 things everyone should know about AI.
- AI is not evil. One of the biggest misconceptions today is that current AI applications are going to steal your job (see the radiology example from our previous blog).
- AI is not a machine. AI is more like a dream, a concept. We already have some components of AI, but if we were to measure how far along we are, we would only have about 1% of the components.
- AI theory and algorithms are not new. Focusing on predictions and correlations instead of causality is not new.
- AI has 3 main processes. The first is data creation, then comes training (deep learning, frameworks, etc.), and finally predictions.
- AI needs to be trained. You can’t expect AI to process data automatically and instantly; the three processes mentioned above are mandatory every single time AI is applied.
- AIs have specific intelligence today. AI is great when it comes to very specific topics, but terrible when we apply it to general ones.
- Data is not everything. AI is only great when you are training a specific model.
- Minimizing bias is a major challenge. Many AI systems will continue to be trained using bad data, making this an ongoing problem. But we believe that bias can be tamed and that the AI systems that will tackle bias will be the most successful.
- AI doesn’t work out of the box. Even if you use standardized packages, you will need to create your own code.
- No one has a monopoly on AI. There is still a lot to discover about AI and we need different teams/companies to work on it.
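The three processes named in the list above (data creation, training, predictions) can be sketched as a toy pipeline. The nearest-centroid "model" below is purely illustrative, a minimal stand-in chosen for this sketch, and not anything Chooch described:

```python
# A minimal sketch of the three AI processes:
# (1) data creation, (2) training, (3) prediction.

def create_data():
    # Stage 1: data creation -- labeled examples as (feature, label) pairs.
    return [(1.0, "low"), (1.2, "low"), (4.8, "high"), (5.1, "high")]

def train(examples):
    # Stage 2: training -- compute the mean feature value (centroid) per label.
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    # Stage 3: prediction -- pick the label whose centroid is closest to x.
    return min(model, key=lambda label: abs(model[label] - x))

model = train(create_data())
print(predict(model, 4.5))  # prints "high": 4.5 is nearest the "high" centroid
```

The point of the sketch is the order of the stages: no prediction is possible until data has been created and a model trained on it, which is exactly why "AI doesn't work out of the box."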
Panel discussion: AI for social good
AI’s impact across Health, Environmental Sustainability, Education & Public Welfare
During this discussion, David Ledbetter, senior data scientist at the Children’s Hospital of Los Angeles, shared his thoughts about AI in Healthcare.
Here are some of the major issues raised:
Open-source AI is already established; unfortunately, we can’t say the same about data access. One of the questions asked was: “How can we ensure that health data is openly accessible to everyone?” Since medical data is siloed across various institutions and special formats, it still takes an enormous effort to achieve ‘universally accessible data’ – if that even exists. The biggest issue is the amount of money going into EHR systems that aren’t open to sharing data. Ideally we would have a universal EHR system that allows data sharing and makes data easily accessible to third parties. However, we still have a long way to go, and this might not be realistic.
Another question was raised concerning ethics and AI, and how we might implement and include technology in the right social context. David pointed out that the concept of ‘bias’ is different in healthcare (than in education or environmental sustainability, for example). The treatment for cardiovascular disease for one child might differ from the treatment for another, e.g. one child might have a specific blood type and allergies while the other doesn’t, and those are important factors to take into consideration during surgery. In this case the bias is needed, as we clearly need to differentiate treatments depending on the child.
An interesting closing question was: “Taking into account how fast technology is evolving, where do we see AI in healthcare 5 years from now?” The panelists were positive about AI becoming a standard part of image processing, as there is currently a lot of progress in detecting and tracking diseases using AI. However, the adoption of AI in the medical/clinical data (text) space is going much more slowly, because there is no pool of money available to make the infrastructural changes that might be needed.
We do believe that we, at LynxCare, can support the evolution of AI in processing clinical outcome data. As our AI-powered big data platform is agnostic to the EMR system and data formats being used, it’s a flexible and customizable platform for clinical departments. Starting with smaller projects where we can demonstrate the ROI for a whole department might be a more realistic approach than spending a lot of money on infrastructural changes – e.g. creating a universal EMR.