Quickstart
Making your first query is as easy as 1, 2, 3
Steps
1) Install CareQuery
Inside your local terminal, install the CareQuery package:
pip install care-query
2) CareQuery API Token and PEM Key
The Monocle Insights Support Team will provide an API token and an SFTP PEM key file for a seamless CareQuery experience, no matter the data size.
- Email lets us manage query access and send user updates for large data requests.
- Token grants access to make queries and read data from CareQuery.
- SFTP key lets users write query results to a reliable SFTP endpoint dedicated to each customer.
Contact Monocle Insights Support Team to request tokens.
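Once you have your token and key, one convenient pattern is to keep them out of source code by reading them from environment variables. The sketch below is illustrative only: the variable names (CAREQUERY_EMAIL, CAREQUERY_TOKEN, CAREQUERY_SFTP_KEY) are not part of the CareQuery package, so use whatever convention your team prefers.

```python
import os

def load_carequery_credentials():
    """Read CareQuery credentials from environment variables.

    The CAREQUERY_* variable names are hypothetical, not defined by
    the care-query package; adapt them to your own setup.
    """
    creds = {
        "email": os.environ.get("CAREQUERY_EMAIL"),
        "token": os.environ.get("CAREQUERY_TOKEN"),
        "sftp_key": os.environ.get("CAREQUERY_SFTP_KEY"),
    }
    missing = [key for key, value in creds.items() if not value]
    if missing:
        raise RuntimeError(f"Missing CareQuery credentials: {missing}")
    return creds
```

The resulting dict can then be unpacked straight into the constructor shown in step 3, e.g. `CareQuery(**load_carequery_credentials())`.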
3) Leverage CareQuery in Python
CareQuery offers two user flows, designed to set clear expectations no matter the size and complexity of the data being requested.
3.A) Basic User Flow for Supplemental and Aggregate Tables
CareQuery's supplemental and aggregate tables are refined and of a manageable size, which allows users to quickly set parameters and read data directly into a Python notebook, ETL script, or analytic workflow. Simply instantiate, build a query, and read the full query results, a sample, or an estimate directly into memory.
# import
from care_query.care_query import CareQuery

# instantiate and connect
cq = CareQuery(email = "your-email",
               token = "your-api-token",
               sftp_key = "path/to/your/company.PEM")

# create first test query
query = cq.procAllowedAvgs(proc_subcategory = "medicine - neurology and neuromuscular procedures",
                           state = ["NY","SD"],
                           payor_channel = ["medicare", "commercial"])

# estimate query results size
print(query.estimate())

# request a sample
sample = query.sample()

# execute full query
data = query.execute()
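The estimate/sample/execute methods above compose naturally into a guard that only pulls the full result when the estimate is manageable. This sketch assumes `estimate()` returns a row count, which this quickstart does not specify; adjust the comparison to whatever `estimate()` actually returns in your environment.

```python
def fetch_if_small(query, max_rows=1_000_000):
    """Execute the full query only when the estimated size is manageable.

    `query` is any CareQuery query object exposing the estimate(),
    sample(), and execute() methods from this quickstart. Treating
    estimate() as a row count is an assumption, not documented behavior.
    """
    if query.estimate() <= max_rows:
        return query.execute()
    # Otherwise settle for a sample so a huge result never lands in memory.
    return query.sample()
```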
3.B) User Flow for SCRIPT_TABLE and JOURNEY_TABLE Sourced Queries
Our longitudinal journey tables contain tens of billions of encounters and hundreds of billions of line items. For these, query results land at an SFTP location that acts as an intermediary between the user making the request and the CareQuery engine requesting, transforming, and moving the data.
This extra step eases automation efforts, takes the compute load off of your local machine, and protects users from requesting huge datasets their laptops might not be able to handle.
# 1) import
from care_query.care_query import CareQuery

# 2) instantiate and connect
cq = CareQuery(email = "your-email",
               token = "your-api-token",
               sftp_key = "path/to/your/company.PEM")

# 3) build query
query_name = cq.careEncounter(proc_code = ["99453","99454","99457","99458","99091"],
                              state = ["KS"],
                              min_date = "2022-01-01",
                              max_date = "2023-06-01",
                              limit = 10000)

# 4) submit query, data will return to SFTP endpoint
# could also run .sample() to return the first 1k rows or .estimate() to get a size estimate
result_object = query_name.execute()

# 5) check query status
# user will be sent an email upon submission and completion, check query status along the way
query_name.queryStatus()

# 6) upon successful completion of query, read data directly into pandas environment
result_data = query_name.sftpToPandas()
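For unattended pipelines, the status check in step 5 can be wrapped in a simple polling loop. This is a hypothetical sketch: it assumes `queryStatus()` returns a string containing "complete" on success, which you should verify against the actual return values in your environment before relying on it.

```python
import time

def wait_for_completion(query, poll_seconds=60, timeout_seconds=3600):
    """Poll query.queryStatus() until the query reports completion.

    Assumes queryStatus() returns a status string containing "complete"
    once the data has landed on the SFTP endpoint -- an assumption, since
    the quickstart does not document the return value.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = str(query.queryStatus()).lower()
        if "complete" in status:
            return status
        if "fail" in status or "error" in status:
            raise RuntimeError(f"Query ended with status: {status}")
        time.sleep(poll_seconds)
    raise TimeoutError("Query did not complete before the timeout")
```

Once the loop returns, `sftpToPandas()` from step 6 can be called safely.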