Report Generation API
The Report Generation API facilitates the creation of secure, encrypted links, and optionally QR codes, for accessing clinical reports within the MD.ai reporting application. By sending a POST request with the necessary clinical tags, users can quickly generate the required report links; a QR code image can also be generated for easy sharing and scanning. The API is straightforward and flexible, enabling seamless integration with various applications and workflows.
This API works both for launching new reports and for retrieving existing reports, which is useful for checking their status (Draft, Final, etc.).
To integrate and generate new reports using our API, you will need a `SiteID` specific to your organization, which will be provided to you by MD.ai. Ensure that you have this identifier before proceeding.
Method: POST
API endpoint: https://chat.md.ai/api/report/launch/clinical
Request Header
Generating MD.ai access tokens
The MD.ai access token can be generated by logging in to chat.md.ai, clicking your user avatar in the top right corner, and going to User Settings -> Access tokens. Make sure to add this token to the header as `x-access-token` for authentication; otherwise, the request will fail.
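As a sketch, the header can be assembled like this in Python (the token value is a placeholder; substitute your own MD.ai access token):

```python
# Build the request headers for the Report Generation API.
# "your-mdai-access-token" is a placeholder -- generate a real token
# from User Settings -> Access tokens on chat.md.ai.
headers = {
    "Content-Type": "application/json",
    "x-access-token": "your-mdai-access-token",
}
```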
Request Body
- `clinicalInfo`: A dictionary object that stores the clinical information/tags required for report creation that are related to the exam/study. The supported tags are listed below. (required)
  - `SiteID`: Unique identifier for your organization (will be provided by MD.ai).
  - `PatientMRN`: Unique patient identifier or Patient MRN.
  - `Accession`: Unique report identifier or Accession Number.
  - `PatientName`: Name of the patient.
  - `PatientBirthDate`: DOB of the patient.
  - `PatientAge`: Age of the patient.
  - `PatientSex`: Sex of the patient.
  - `Modality`: Modality of the study (e.g., CT, MR, etc.).
  - `BodyPartExamined`: Body part of interest for the report.
  - `StudyDescription`: Brief description of the study.
  - `StudyDate`: Date when the study was performed.
  - `StudyTime`: Time of the study.
  - `StudyInstanceUID`: Study UID of the imaging study related to the report.
  - `ReasonForExam`: Reason for the exam order, or patient history/symptoms.
  - `KeyFindings`: Draft report or text that you want to pre-populate the report with.
  - `ReferringPhysician`: Name/credentials of the patient's referring physician.
  - `ReportingPhysician`: Name/credentials of the patient's reporting physician.
  - `Notes`: Additional clinical notes or context, such as medication, allergies, labs, pathology results, etc.
- `reportingApp`: A dictionary object that stores parameters for configuring the MD.ai reporting application, such as the format in which the final report should be sent back (e.g., HL7) or the speech-to-text and UI languages. The parameters are listed below. (required)
  - `AppLanguage`: Language for UI localization. The application interface will render in the specified language.
  - `SpeechLanguage`: Language used for dictation and speech recognition. Ensures the speech-to-text system processes spoken inputs in the correct language.
  - `ReturnFormat`: Format for returning the report back to your system. "HL7" is supported.
  - `ApplyTemplate`: Optional template number to automatically map key findings to when applying AI.
  - `SyncLaunch`: Boolean flag to keep the report window synced for new report launches. Read more about this in the Integration Guide section.
  - `AutoComparePrior`: Boolean flag to enable auto-compare of prior report findings.
If you want to sync and open new reports in the same or already opened reporting window on going to a new study in your application/worklist, please refer to the integration guide here.
Language Settings for Non-English Usage
Many of our AI features rely on the language settings, so for non-English usage the `AppLanguage` and `SpeechLanguage` tags in the request are required. If not provided, both tags default to `en` (English). For the full list of supported languages, see the Supported Languages section below.
- `response`: A dictionary object that specifies the type of output to be returned by the API: just the encrypted report URL, the QR code image URL, or both. It also includes options to control the size and mode of the QR code. (required)
  - `type`:
    - `link`: Returns only the encrypted report URL.
    - `qrcode`: Returns only the QR code image URL.
    - `all`: Returns both the encrypted report URL and the QR code image URL.
  - `qrCodeOptions`: If `qrcode` or `all` was used in `response: {type: }` to return a QR code, use this parameter to control the size and mode of the QR code returned. (optional)
    - `size`:
      - `standard`: The width of the QR code will be 400 pixels. (default)
      - `mini`: The width of the QR code will be 80 pixels.
    - `mode`:
      - `light`: Suitable for light mode/white backgrounds. (default)
      - `dark`: Suitable for dark mode/black backgrounds.
- `userInfo`: A dictionary object used to assign reports to a specific user. By default, reports are assigned to the user making the API call based on the access token used, except for iframe integrations, where this tag is required. (optional, but required for iframe integration)
- `lazyLaunch`: A boolean value that determines when the report is actually created and assigned. (optional)
  - `false`: The report is created, assigned, and saved in the backend immediately when the API request is made. (default)
  - `true`: The API will generate and return a link to the report and/or QR code without immediately creating the actual report in the backend. The report will only be created, assigned, and saved when the generated link is accessed for the first time. This is useful for scenarios where you want to defer the creation of the report until it is actually needed.
- `dicomSR`: A dictionary object that contains the base64-encoded representation of the DICOM SR file to be imported as the report. The clinical information will be read from the DICOM SR and added to the respective locations. If both `clinicalInfo` and `dicomSR` are provided in the API call, values from `clinicalInfo` take precedence. (optional)
- `keyImages`: A list of dictionary objects that contains key images to be attached to the report. (optional)
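Putting the tags together, a minimal request body might be assembled as follows. All values here are illustrative placeholders; include only the tags you need from the lists above, and use the `SiteID` issued to you by MD.ai:

```python
import base64
import json

# Illustrative payload with the three required top-level objects
# (clinicalInfo, reportingApp, response) plus one optional key image.
# The image bytes below are fake stand-ins, not a real PNG.
image_b64 = base64.b64encode(b"\x89PNG-fake-image-bytes").decode("ascii")

payload = {
    "clinicalInfo": {
        "SiteID": "your-site-id",   # placeholder: provided by MD.ai
        "PatientMRN": "mrn-001",
        "Accession": "acc-001",
        "Modality": "CT",
    },
    "reportingApp": {
        "AppLanguage": "en",
        "SpeechLanguage": "en",
        "ReturnFormat": "HL7",
    },
    "response": {"type": "link"},
    "keyImages": [
        {
            "imageData": image_b64,
            "contentType": "image/png",
            "fileName": "key1.png",
        }
    ],
}

body = json.dumps(payload)
```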
HL7 Integration
For receiving reports back as HL7 or for any custom integration, we’ll work with your team to configure the system/endpoint details. This workflow will only be triggered after the report is signed/finalized by the user.
Example request
Here is an example of the request body with all the tags described above:
```json
{
  "clinicalInfo": {
    "SiteID": "test",
    "PatientMRN": "test-mrn",
    "Accession": "test-accession",
    "PatientName": "John Doe",
    "PatientBirthDate": "2024-01-01",
    "PatientAge": "50",
    "PatientSex": "M",
    "Modality": "CT",
    "BodyPartExamined": "Abdomen",
    "StudyDescription": "CT abdomen w/wo contrast",
    "StudyDate": "2024-01-01",
    "StudyTime": "122009",
    "StudyInstanceUID": "1.2.1.2",
    "ReasonForExam": "History of colon cancer and new onset abdominal pain.",
    "KeyFindings": "This is a test report",
    "ReferringPhysician": "Dr. John Doe, M.D.",
    "ReportingPhysician": "Dr. Jane Doe, M.D.",
    "Notes": "No allergies or medications"
  },
  "reportingApp": {
    "AppLanguage": "en",
    "SpeechLanguage": "en",
    "ReturnFormat": "HL7",
    "SyncLaunch": true
  },
  "userInfo": {
    "UserName": "Test user",
    "UserEmail": "test-user@md.ai"
  },
  "response": {
    "type": "all",
    "qrCodeOptions": {
      "size": "mini",
      "mode": "dark"
    }
  },
  "lazyLaunch": false,
  "dicomSR": {
    "fileData": "9j/4AAQSkZJRgABAQAASABIAA"
  },
  "keyImages": [
    {
      "imageData": "9j/4AAQSkZJRgABAQAASABIAA",
      "contentType": "image/jpeg",
      "fileName": "first.jpeg"
    },
    {
      "imageData": "9j/4AAQSkZJRgABAQAASABIAA",
      "contentType": "image/jpeg",
      "fileName": "second.jpeg"
    }
  ]
}
```
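A request like the one above can be sent with Python's standard library. The following is a sketch: the token and payload values are placeholders, and the actual network call is left commented out.

```python
import json
import urllib.request

# Placeholder token and a minimal illustrative payload.
token = "your-mdai-access-token"
payload = {
    "clinicalInfo": {"SiteID": "your-site-id", "Accession": "acc-001"},
    "reportingApp": {"AppLanguage": "en", "SpeechLanguage": "en"},
    "response": {"type": "link"},
}

req = urllib.request.Request(
    "https://chat.md.ai/api/report/launch/clinical",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "x-access-token": token,
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
#     print(result["reportLink"])
```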
Response
The response for this POST request will be a JSON object that contains the following keys:
- `status`: Indicates the current status of the report, allowing users to determine whether the report is finalized, still being edited, or pending creation.
  - `Final`: The report is finalized and signed. No further changes can be made.
  - `Draft`: The report is still being edited and has not been signed or finalized.
  - `PreLaunch`: This special status is returned when `lazyLaunch` is set to `true`. It indicates that the report has not yet been created or assigned; only the link has been generated.
- `owner`: Contains information about the owner of the report (name and email).
- `reportLink`: Contains the URL for the generated report based on the tags provided. This link is encrypted and does not display the actual tags that were sent, to ensure secure access to the report.
- `reportQrCode`: If `qrcode` or `all` was used in `response` to return a QR code, this field contains a base64-encoded string representing the QR code image to access the report. The base64 string starts with `data:image/png;base64,` followed by the encoded image data.
Example response
- If the report already exists and is signed/finalized with `response: {"type": "link"}`:
- For `response: {"type": "all"}` and `lazyLaunch: false`:
- For `response: {"type": "link"}` and `lazyLaunch: true`:

Note that there is no `owner` in the last case, since we are just generating a link in advance without actually assigning or launching the report.
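When a QR code is returned, the `reportQrCode` value can be turned back into a PNG file with a few lines of standard-library Python. This is a sketch; the sample data URL below carries fake stand-in bytes rather than a real PNG.

```python
import base64

def qr_data_url_to_png_bytes(report_qr_code: str) -> bytes:
    """Strip the data-URL prefix and decode the base64 payload."""
    prefix = "data:image/png;base64,"
    if not report_qr_code.startswith(prefix):
        raise ValueError("unexpected QR code format")
    return base64.b64decode(report_qr_code[len(prefix):])

# Example with a stand-in payload (a real response carries actual PNG data):
sample = "data:image/png;base64," + base64.b64encode(b"fake-png-bytes").decode("ascii")
png_bytes = qr_data_url_to_png_bytes(sample)
# open("report_qr.png", "wb").write(png_bytes)  # save to disk
```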
Supported Languages
MD.ai provides a comprehensive multilingual reporting system. To customize the language in your reports, use the corresponding `Code` from the table below when configuring the `SpeechLanguage` (speech-to-text language) and `AppLanguage` (UI language) parameters in our report generation API. Alternatively, you can select the preferred language directly from the user settings.
The currently supported languages are listed below, defaulting to `en` (English US):
| Code | Language |
|---|---|
| en | English (US) |
| en-AU | English (Australia) |
| es | Spanish |
| fr | French |
| ja | Japanese |
| de | German |
| nl | Dutch |
| ko | Korean |
| pl | Polish |
| ro | Romanian |
| id | Indonesian |
| pt | Portuguese (Brazil) |
| zh-CN | Chinese Simplified |
| zh-Hant | Chinese Traditional |