Pipeline Editor

Gain a comprehensive understanding of a data pipeline

Introduction

The Pipeline Editor is a great tool for users of Kleene to gain a comprehensive understanding of a data pipeline, from its lineage to the details of a chosen table or transform object. It is also possible to edit existing transforms from the Pipeline Editor. To get started, select a table or transform object either by searching for its name or choosing it from the appropriate schema or transform folder.

Accessing the Pipeline Editor

Select Pipeline Editor from the menu list.

The landing page displays all transforms and tables. You can search for transforms using the input field, navigate to transforms and tables in your transform folder or schema, and access both the SQL console and SQL generator from this page.

To open a transform: a) Search for a specific transform in the input field. A dropdown list appears with options to search by 'Name' or 'SQL'. Click the transform you want to open.

b) Click on the transform group to display the list of transforms. Then click on a specific transform to open its pipeline view.


To open a table: Click on database → schema to view the list of tables.

Select table from schema


Table in pipeline editor

Create a new transform using either the 'Pipeline editor' or 'SQL generator'.

Click on 'Pipeline editor' to display the 'Create new transform' modal. Enter the transform name, select a group and click the 'Create' button.


Click on 'SQL Generator' to display the SQL generator interface.

Navigating the Pipeline View

View the complete lineage of selected objects, with upstream nodes on the left and downstream nodes on the right.

Toggle between viewing all nodes, upstream nodes only, or downstream nodes only.

Zoom, pan, and highlight dependencies for clearer visualisation.

Open new objects by double-clicking nodes.

To add a new transform: Hover over a table node and click the '+' button to open the pre-populated 'Create new transform' modal.

The pre-populated 'Create new transform' modal is displayed. Edit the details as needed and click the 'Create' button.

A new transform and table node appear in the relevant pipeline branch.

Right-click a transform node to access menu options including 'Run' and 'Delete'.


Working with Transforms

Run transforms directly from the pipeline view. A dropdown menu offers three options: 'Run upstream', 'Run downstream', or 'Run this one'.

Edit SQL code directly in the editor and save changes by clicking the 'Save' button.

Click 'Preview SQL' to view the SQL code that produces a table.

Click the 'Add unit test' button for options to add unit tests.


Configuring Transform Settings

  • View execution logs
  • Settings: set active status, manage dependencies and configure schedules
  • Access version history with diff comparisons
  • Set up webhooks (inbound and outbound)
  • Configure notifications
  • Perform management actions like deleting transforms

Working with Tables

Preview data to examine a sample of the table's contents (limited to 1000 rows).

Examine column names and data types.

Click 'Unit test results' to view all test outcomes.


Example Use Cases

Create a New Transform via the Pipeline Editor SQL Console

Context: A data analyst needs to create a new transform to process data from an existing table.

User Goal: Create a SQL transform that processes data from an existing table and outputs results to a new table.

Step-by-Step Procedure:

  1. Access the Pipeline Editor from the menu list
  2. Click on "SQL Console" in the Pipeline Editor interface
  3. Write your transformation SQL code in the console
  4. Preview and test your SQL to ensure it works correctly
  5. Save the transform with a descriptive name
  6. Configure settings as needed (scheduling, dependencies, etc.)
  7. Run the transform to verify it works properly
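As an illustration of step 3, a minimal transform query might look like the following sketch. The table and column names ('analytics.orders', 'order_date', 'amount') are hypothetical placeholders, not part of any real schema:

```sql
-- Hypothetical example: aggregate daily order totals.
-- 'analytics.orders' and its columns are placeholders for your own tables.
SELECT
    order_date,
    COUNT(*)    AS order_count,
    SUM(amount) AS total_revenue
FROM analytics.orders
GROUP BY order_date
```

When the transform runs, the query's result set provides the contents of its output table.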

Expected Outcome: A new transform is created, saved, and successfully executed, with output data available in the specified table.

Explore Data Lineage with an Existing Transform or Table

Context: A data analyst needs to trace data flow and understand dependencies between tables and transforms.

User Goal: View and analyse the complete data lineage of an existing transform or table to understand its relationships with other components.

Step-by-Step Procedure:

  1. Access the Pipeline Editor from the menu list
  2. For transforms: Search for a specific transform in the input field or navigate through the transform folders, then click on a specific transform to open its pipeline view
  3. For tables: Click on database → schema to display the list of tables, then select the desired table
  4. Examine the complete lineage showing upstream nodes (left) and downstream nodes (right)
  5. Review the dependencies and relationships between components
  6. Navigate deeper into the pipeline by double-clicking on related nodes to open them

Expected Outcome: The user gains clear insight into how data flows through the selected transform or table, including all upstream dependencies and downstream impacts.

Opening and Running an Existing Transform

Context: A data analyst needs to verify the functionality of an existing transform or ensure it processes the latest data.

User Goal: Open an existing transform in the Pipeline Editor and execute it to verify it works correctly or to process new data.

Step-by-Step Procedure:

  1. Access the Pipeline Editor from the menu list
  2. Locate the transform by either:
    • Searching for the specific transform in the input field
    • Navigating through transform folders in the sidebar
  3. Open the transform by clicking on it to view its pipeline view
  4. Run the transform directly from the editor
  5. Review the output to verify the transform executed successfully

Expected Outcome: The transform is successfully executed with the latest data, and the output table is updated accordingly.

Modifying an Existing Transform

Context: A data analyst needs to update an existing SQL transform to accommodate additional business requirements or fix issues.

User Goal: Modify the SQL code in an existing transform and execute it to verify the changes work correctly.

Step-by-Step Procedure:

  1. Access the Pipeline Editor from the menu list
  2. Locate the transform by either:
    • Searching for the specific transform in the input field
    • Navigating through transform folders in the sidebar
  3. Open the transform by clicking on it to view its pipeline view
  4. Edit the SQL code by typing directly into the SQL editor
  5. Save the modified transform by clicking "Save"
  6. Run the transform directly from the editor to verify changes
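As an example of step 4, a modification might add a filter to an existing query to meet a new business requirement. All table and column names here are illustrative, not from any real schema:

```sql
-- Before the change, the transform aggregated all orders.
-- The WHERE clause is the newly added condition; names are illustrative.
SELECT
    order_date,
    COUNT(*)    AS order_count,
    SUM(amount) AS total_revenue
FROM analytics.orders
WHERE status <> 'cancelled'
GROUP BY order_date
```

After saving and running the transform, the output table reflects the filtered data.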

Expected Outcome: The transform is successfully updated with the new SQL code, saved, and executed, confirming that the modifications work as intended.

Scheduling a Transform with Downstream Dependencies

Context: A data analyst needs to schedule a transform and execute it together with its dependencies.

User Goal: Schedule a transform that has downstream dependencies and view the execution timeline in a Gantt chart to understand the impact on the entire data pipeline.

Step-by-Step Procedure:

  1. Access the Pipeline Editor from the menu list
  2. Locate the transform by searching for it or navigating through transform folders
  3. Open the transform to view its pipeline view showing upstream and downstream dependencies
  4. Configure the schedule for the upstream transforms only

Expected Outcome: The transform is successfully scheduled, with its dependencies correctly configured and executed.

Create a New Transform via Table Node

Context: A data analyst needs to quickly create a new transform directly from an existing table in the pipeline view.

User Goal: Create a new transform starting from a source table without having to manually configure the input source.

Step-by-Step Procedure:

  1. Access the Pipeline Editor from the menu list
  2. Locate and select the desired source table in the pipeline view
  3. Hover over the table node and click the "+" button to open the pre-populated "Create new transform" modal
  4. In the modal, modify the pre-filled information as needed (transform name, output table name, and group)
  5. Click the "Create" button to generate the new transform
  6. Verify that the new transform and output table node appear in the pipeline view
  7. Edit the SQL code if necessary and run the transform to confirm it works properly

Expected Outcome: A new transform is successfully created with the selected table automatically set as the source, saving time and reducing the potential for errors in specifying input sources.


Feature Requests

To suggest a new feature, go to the Resource Center, select 'Share Your Feedback' and add a suggestion.
