6 Benefits of R {targets} for Data Science Workflows

By:
Alexandros Kouretsis
January 23, 2024

Efficiency is paramount in navigating the intricacies of <a href="https://appsilon.com/r-programming-vs-excel-for-business-workflow/" target="_blank" rel="noopener">data science workflows</a>, and multiple challenges can occur.

For example:
<ul>  <li>Working with large datasets</li>  <li>Data existing in a variety of formats</li>  <li>Multiple computing systems being involved</li>  <li>Different methods of accessing data</li></ul>
This post delves into the key considerations for crafting a robust and secure data science workflow.

See also my presentation at the Posit Conference.

<iframe title="YouTube video player" src="https://www.youtube.com/embed/PLKRd2pVFYA?si=rIhZVcKoSeNppq6m" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe>
<h3>Table of Contents</h3><ul>  <li><strong><a href="#landscape-data-science">The Landscape of Data Science Workflows</a></strong></li>  <li><strong><a href="#glimpse-targets">A Glimpse into {targets}</a></strong></li>  <li><strong><a href="#dag-complexity">Directed Acyclic Graphs (DAGs) Simplify Complexity</a></strong></li>  <li><strong><a href="#scalability-cloud-storage">Scalability and Cloud Storage Integration</a></strong></li>  <li><strong><a href="#distributed-computing">Distributed Computing Made Simple</a></strong></li>  <li><strong><a href="#automation-reproducibility">Automation and Reproducibility</a></strong></li>  <li><strong><a href="#security-posit-connect">Security and Automation with Posit Connect</a></strong></li>  <li><strong><a href="#extensibility-resilience">Extensibility and Resilience</a></strong></li>  <li><strong><a href="#conclusion">Conclusion: Empowering Data Science Journeys</a></strong></li></ul>

<hr />

<img class="size-full wp-image-22973" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e65eac31ef8aa382702e_acc92ce3_data-science-workflow.webp" alt="Image of various technologies that might be involved in a data science workflow" width="1600" height="676" /> Image of various technologies that might be involved in a data science workflow[/caption]

Fundamental ideas present in a data science workflow include:
<ul>  <li><strong>AI-Driven Data Collection and Machine Learning:</strong><ul>  <li>Leverage artificial intelligence for targeted data acquisition.</li>  <li>Use natural language processing algorithms for sentiment analysis on scraped data.</li>  <li>Implement robust mechanisms for monitoring and training machine learning models, including neural networks and AI systems.</li></ul>
</li>
 <li><strong>Advanced Statistical Computing and Data Imputation:</strong>
<ul>  <li>Implement sophisticated statistical methods, like Bayesian inference techniques, for predictive modelling in real-time scenarios.</li>  <li>Employ data imputation techniques to handle uncertainty in missing data scenarios.</li>  <li>Utilize parallel processing for large-scale statistical computations to enhance computational efficiency.</li>  <li>Explore ensemble methods and advanced model averaging techniques to improve the robustness of statistical models.</li></ul>
</li>
 <li><strong>Data Governance, Migration, and Optimization:</strong>
<ul>  <li>Incorporate robust data governance principles for ensuring data quality, integrity, and compliance.</li>  <li>Establish controls to track data lineage effectively.</li><li>Employ efficient data migration strategies for seamless transfers between storage systems.</li><li>Conduct precomputation steps to optimize downstream processes.</li></ul>
</li>
 <li><strong>Validation, Curation, and Monitoring:</strong>
<ul>  <li>Integrate validation processes seamlessly within the workflow.</li><li>Implement data curation practices to refine and enhance the quality of the dataset.</li><li>Implement monitoring mechanisms to ensure the ongoing health and integrity of the data.</li></ul>
</li>
</ul>
<h2 id="landscape-data-science">The Landscape of Data Science Workflows</h2>
In the ever-evolving landscape of data science, where vast datasets and intricate computations reign supreme, the quest for efficiency becomes paramount. Crafting a streamlined and effective data science workflow requires more than just tools; it demands a strategic approach. In this journey towards optimization, the tandem of R and the game-changing framework, {targets}, emerges as a powerful combination, unlocking new possibilities for reproducibility, scalability, and ease of management.

Data science workflows have grown exponentially in size and complexity, encompassing diverse computing systems and technologies. Navigating this intricate terrain demands careful planning and execution. Without a well-orchestrated approach, chaos may ensue, for example:
<ul>  <li>Inconsistent data formats and sources.</li><li>Unmanaged dependencies and library conflicts.</li><li>Poor documentation for code, models, and transformations.</li><li>Insufficient data governance leading to poor data quality.</li><li>Inefficiencies and bottlenecks as the workflow scales.</li>  <li>Inadequate security measures risking data breaches.</li></ul>
<h2 id="glimpse-targets">A Glimpse into {targets}</h2>
{targets} is a powerful function-oriented framework designed to streamline the intricacies of data science processes. It is an opinionated framework providing a structured way to build data science workflows. It introduces a simple yet powerful abstraction called a "target" – <em>that is, simply a function that outputs an object</em>.

<img class="size-full wp-image-22977" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e65f33b1a67d75650e63_02ac2028_what-is-a-target.webp" alt="Image of a target, that is simply a function that outputs an object" width="688" height="467" />

Image of a target, that is simply a function that outputs an object

By convention, this object is always pushed to persistent storage. Let's delve into the key aspects that make R and {targets} a compelling choice for optimal data science workflows.
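To make the idea concrete, below is a minimal `_targets.R` sketch; the file path, column names, and helper functions are hypothetical and only illustrate the pattern of declaring targets.

```r
# _targets.R -- a minimal pipeline sketch (hypothetical file and function names)
library(targets)

# Helper functions would normally live in R/ and be loaded with tar_source()
read_data <- function(path) read.csv(path)
fit_model <- function(data) lm(y ~ x, data = data)

list(
  tar_target(raw_file, "data/raw.csv", format = "file"),  # track the input file
  tar_target(raw_data, read_data(raw_file)),              # a target: a function whose output is stored
  tar_target(model, fit_model(raw_data))                  # depends on raw_data
)
```

Running `targets::tar_make()` executes the pipeline and writes each object to the `_targets/` store; `targets::tar_read(model)` retrieves a stored result later.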

Here are some resources to get you started:
<ul>  <li><a href="https://books.ropensci.org/targets/" target="_blank" rel="noopener noreferrer">User manual</a></li>  <li><a href="https://appsilon.com/r-targets-reproducible-data-science-pipeline/" target="_blank" rel="noopener">Hands-on blogpost</a></li>  <li><a href="https://appsilon.github.io/data.validator/articles/targets_workflow.html" target="_blank" rel="noopener">Data validation workflow</a></li>  <li><a href="https://appsilon.com/shiny-apps-production-stability-testing-with-targets/" target="_blank" rel="noopener">Application stability testing</a></li></ul>
<h3 id="dag-complexity">Directed Acyclic Graphs (DAGs) Simplify Complexity</h3>
Ever found yourself needing to streamline your computational workflows, avoiding unnecessary and time-consuming steps, while still maintaining a clear record of your data processing sequence? {targets} seamlessly infers the Directed Acyclic Graphs (DAGs) of your workflow, strategically skipping unnecessary steps in the pipeline. This ensures that only relevant computations are executed, saving valuable time and resources. The elegance of DAGs shines as they orchestrate the flow of logic, making the workflow efficient and organized. This is also reflected in the development processes, transforming burdensome tasks into streamlined and productive workflows.

In the image below, we can see an example of a DAG where each node represents a specific function that operates on data or simply a target. It can be anything we can imagine, from reading a file to training a machine learning model.

<img class=" wp-image-22981" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e65f642b942e6898b04f_c96c7076_Directed-Acyclic-Graph.webp" alt="Directed Acyclic Graph" width="621" height="409" />

Directed Acyclic Graph

<strong>Example:</strong> Consider transcripts-per-million gene expression values measured across human tissues. Computing correlation statistics between thousands of genes is a cumbersome task that can take hours or even days to finish. Having these results precomputed will significantly optimize queries performed by downstream applications. If these computations are part of a bioinformatics workflow that is triggered daily, we want to skip them whenever the results from the previous run are still valid.

{targets} takes care of this by carefully storing metadata about the functions and objects involved in the pipeline, so it can automatically decide which parts of the workflow are invalidated. In the image below, we can consider nodes [A, B, C, E] to be involved in the cumbersome computations. Since the correlation statistics are known from prior runs, if the input remains unchanged the workflow can be completed in minutes rather than hours, assuming the remaining portion consists of smaller calculations that are regularly invalidated.

<img class="wp-image-22983 " src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e66090bc3b2b3fcd717b_46f81a27_Directed-Acyclic-Graph-with-some-parts-invalidated.webp" alt="Directed Acyclic Graph with some parts invalidated" width="633" height="433" />

An illustration of a Directed Acyclic Graph with some parts invalidated
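As a sketch of how this looks in practice, you can inspect the inferred DAG and see which targets would be recomputed before running anything; the expensive targets are skipped when they are still up to date.

```r
library(targets)

tar_visnetwork()   # render the inferred DAG, with outdated targets highlighted
tar_outdated()     # list targets whose upstream data or code have changed
tar_make()         # run the pipeline; up-to-date targets (e.g. the expensive
                   # correlation statistics) are skipped automatically
```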
<h3 id="scalability-cloud-storage">Scalability and Cloud Storage Integration</h3>
Ever faced the challenge of managing vast amounts of data in the cloud, wrestling with scalability and version control intricacies? {targets} extends its capabilities to cloud storage, allowing workflows to scale to petabytes of data. By integrating with cloud storage solutions, data version control becomes a breeze. This not only enhances scalability but also provides a robust foundation for collaboration and result sharing among team members.

<img class="wp-image-22985 alignnone" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e661aa096a8e3907613f_a9d14d6d_cloud-storage.webp" alt="" width="233" height="244" />

Cloud storage integration is also a key element of hosting and automating a {targets} pipeline on cloud workers, for example, a scheduled report in <a href="https://posit.co/products/enterprise/connect/" target="_blank" rel="noopener noreferrer">Posit Connect</a> that drives the {targets} pipeline. Cloud workers introduce a problem to solve: every time a run completes, the worker's local filesystem is cleared of all results. In other words, unless you incorporate an external persistent storage system into the process, cloud workers cannot recall the outcomes of earlier pipeline runs. {targets} provides this option by simply declaring the storage location in the pipeline configuration. A common pattern is to use AWS S3 buckets, which can be accessed from R via the {paws} package and the settings of the {targets} pipeline:

<img class="wp-image-22987 alignnone" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e6616a34fa51cab1067c_151f54e3_paws.webp" alt="" width="779" height="276" />

{targets} will take responsibility for pushing all reproducibility evidence and objects computed in the pipeline to the remote file storage, and they can be easily loaded to inspect intermediate results.
<h3 id="distributed-computing">Distributed Computing Made Simple</h3>
Ever found yourself grappling with the complexities of distributed computing and parallel processing, yearning for a solution that simplifies the process? {targets} simplifies parallel and distributed computing with easy configuration options. By setting the number of workers, you can harness the power of parallel processing and distribute computations across multiple nodes. The next target in the queue that can be computed is assigned as soon as a worker becomes available.

This ensures that large-scale data processing becomes more manageable and time-efficient. {targets} provides a variety of computing backends, the latest addition being the {<a href="https://wlandau.github.io/crew/" target="_blank" rel="noopener noreferrer">crew</a>} package, which is also the default computing framework. It also allows time-consuming jobs to be deployed asynchronously to distributed systems—from cloud services to more traditional clusters and high-performance computing schedulers (<a href="https://slurm.schedmd.com/" target="_blank" rel="noopener noreferrer">SLURM</a>, SGE, LSF, and PBS/TORQUE).

Parallel workloads can be easily defined in the options settings of {targets}. For example, we can define two local workers as follows:

<img class="wp-image-22989 alignnone" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e662e7acb1f2ca2b4907_c0c9de0d_parallel-workloads.webp" alt="" width="745" height="334" />

Take note of how two distinct models are fitted to the same data in the example above. When workers are available, the two models are trained in parallel.
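A comparable configuration in code might look like the sketch below, using the {crew} backend with two local workers and two hypothetical model-fitting targets (the data file and column names are placeholders):

```r
# _targets.R -- parallel workloads with two local {crew} workers (hypothetical targets)
library(targets)
library(crew)

tar_option_set(
  controller = crew_controller_local(workers = 2)  # two local worker processes
)

list(
  tar_target(measurements, read.csv("data/measurements.csv")),  # shared input data
  tar_target(model_linear, lm(y ~ x, data = measurements)),     # model 1
  tar_target(model_loess, loess(y ~ x, data = measurements))    # model 2, can run on the other worker
)
```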
<h3 id="automation-reproducibility">Automation and Reproducibility</h3>
Reproducibility and automation are at the core of {targets}. Scheduling reports, versioning, and inspecting results are made easy with a straightforward integration with Quarto and R Markdown. In the code chunk below, we see a minimal example where a targets pipeline is executed from a Quarto document:

<img class=" wp-image-22991 alignnone" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e66390bc3b2b3fcd7534_46248641_target-quarto.webp" alt="" width="487" height="248" />

This level of automation enhances reproducibility by following a literate programming mentality, allowing you to track changes, revisit historical reports, and maintain a clear audit trail of your data science processes. You can trigger a {targets} pipeline in any hosting environment where an R-script can be executed and scheduled, <a href="https://docs.posit.co/connect/user/scheduling/" target="_blank" rel="noopener noreferrer">with Posit Connect being an excellent choice</a>.
<h3 id="security-posit-connect">Security and Automation with Posit Connect</h3>
Posit Connect is an excellent platform for hosting and automating <a href="https://www.youtube.com/watch?v=V82BBU9ldcM&amp;list=PL9HYL-VRX0oRsUB5AgNMQuKuHPpNDLBVt&amp;index=4" target="_blank" rel="noopener noreferrer">data science workflows</a>. The synergy between Quarto/Rmarkdown and {targets} extends further with seamless integration into Posit Connect. This not only ensures a secure environment for running processes but also allows for controlled access to specific users and a wide range of sharing and automation features. Posit Connect becomes a powerhouse for security and automation, complementing the capabilities of {targets}, bringing efficiency to the next level.

<img class="aligncenter wp-image-22993" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65e9e6636a34fa51cab108c6_14aac57f_security-and-automation-with-posit.webp" alt="" width="439" height="343" />
<h3 id="extensibility-resilience">Extensibility and Resilience</h3>
Building data science workflows with an open-source framework like R and {targets} offers great extensibility and resilience. Relying on the power of code, developers can craft their solutions, moulding the framework to accommodate their unique requirements.

Developers not only build upon the robust foundations of {targets}, but they can also introduce innovative functionalities, ensuring the adaptability of the framework to the ever-evolving landscape of data science. This coding freedom not only amplifies the extensibility of any workflow created as a {targets} project but also fortifies its resilience.

Unlike solutions tethered to specific vendors, a {targets} pipeline provides a flexible and sustainable open-source solution, empowering users to modify the source code effortlessly and extend its functionalities. This enhances the framework's resilience, making it a valuable asset for long-term projects.

See also <a href="https://wlandau.github.io/targetopia/" target="_blank" rel="noopener noreferrer">targetopia</a>, an R package ecosystem for democratized reproducible pipelines at scale.
<blockquote>Explore further how to elevate your data science and machine learning projects with R {targets} – <a href="https://appsilon.com/r-targets-reproducible-data-science-pipeline/" target="_blank" rel="noopener">start creating reproducible and efficient pipelines</a>.</blockquote>
<h2 id="conclusion">Conclusion: Empowering Data Science Journeys</h2>
To sum up, the integration of R with {targets} enables data scientists and developers to set off on a path towards workflow efficiency. As a modern, function-oriented framework, {targets} makes workflows simpler and more scalable and ensures that outcomes are reproducible, changing how we conceptualize and carry out data science projects.

Using R and {targets} to navigate the ever-changing data science world is a strategic step that offers a solid framework for effective, scalable, and cooperative projects. The future of data science workflows awaits, inviting those ready to elevate their approaches to join this transformative voyage. Additionally, the seamless integration with Posit Connect provides an added layer of collaboration and security, further enhancing the potential for streamlined and efficient data science endeavours.

See also my <a href="https://www.youtube.com/watch?v=PLKRd2pVFYA" target="_blank" rel="noopener">Posit Conference 2023 presentation</a>!
<blockquote>Excited about elevating your R and Shiny skills? Join us at the next Shiny Gathering for more insights and networking opportunities. <a href="https://www.shinyconf.com/shiny-gatherings" target="_blank" rel="noopener">Register now and be part of our innovative R community!</a></blockquote>
<h3>You May Also Like:</h3><ul>  <li><a href="https://appsilon.com/nextflow-for-computational-biology-workflows/" target="_blank" rel="noopener">Unlocking Efficiency in Computational Biology: How Nextflow Streamlines Workflow Management</a></li>  <li><a href="https://appsilon.com/r-programming-vs-excel-for-business-workflow/" target="_blank" rel="noopener">5 Ways R Programming and R Shiny Can Improve Your Business Workflows</a></li>  <li><a href="https://appsilon.com/remote-data-science-team-best-practices-scrum-github-and-docker/" target="_blank" rel="noopener">Remote Data Science Team Best Practices: Scrum, GitHub, Docker, and More</a></li></ul>
