Webscraping Dynamic Websites with R

By:
Ivan Millanes
September 1, 2022

In this post, you'll learn how to scrape dynamic websites in R using {RSelenium} and {rvest}. Although some basic knowledge of rvest, HTML, and CSS is required, I'll explain the basic concepts throughout the post, so even beginners will get something out of this tutorial on webscraping dynamic sites in R. You can do a lot with R these days. Discover these 6 essential R packages from <a href="https://appsilon.com/r-for-programmers/" target="_blank" rel="noopener">scraping webpages to training ML models</a>. TOC: <ul><li><a href="#static">Static vs Dynamic Web Pages</a></li><li><a href="#setup">Setup</a></li><li><a href="#basics">Basics of Webscraping in R</a></li><li><a href="#example">Example of Webscraping in R</a></li><li><a href="#tips">Tips for Working with RSelenium</a></li></ul> <hr /> <h2 id="static">Static vs Dynamic Web Pages</h2> Let's compare the following websites: <ul><li><a href="https://www.imdb.com/" target="_blank" rel="nofollow noopener">IMDB</a> - an internet movie database</li><li><a href="https://www.premierleague.com/stats/top/players/goals" target="_blank" rel="nofollow noopener">Premier League</a> - a site containing football (soccer) statistics and info</li></ul> On IMDB, if you search for a particular movie (e.g. <a href="https://www.imdb.com/title/tt0468569/" target="_blank" rel="nofollow noopener">The Dark Knight</a>), you can see that the URL changes and is different from the URL of any other movie (e.g. <a href="https://www.imdb.com/title/tt2397535/" target="_blank" rel="nofollow noopener">Predestination</a>). On the other hand, if you go to <a href="https://www.premierleague.com/stats/top/players/goals" target="_blank" rel="nofollow noopener">Premier League Player Stats</a>, you'll notice that modifying the filters or clicking the pagination button to access more data doesn't change the URL. The first website is an example of a <strong>static</strong> web page, whereas the second is an example of a <strong>dynamic</strong> web page.
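To see the contrast in practice, here's a minimal sketch of scraping a static page with rvest alone: the whole page is available from a single URL, so no browser automation is needed. (The <code>h1</code> selector is an assumption on my part and may break if IMDB changes its markup.)

```r
library(rvest)

# Static page: everything we need is in the initial HTML response
page <- read_html("https://www.imdb.com/title/tt0468569/")

# Extract the movie title (the selector is an assumption; inspect the page to confirm)
page %>%
  html_element("h1") %>%
  html_text2()
```

Nothing like this works for the Premier League stats tables, because the data you want is loaded by JavaScript after the page arrives.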
<ul><li>Static Web Page: A web page (HTML page) that contains the same information for all users. Although it may be periodically updated, it does not change with each user retrieval.</li><li>Dynamic Web Page: A web page that provides custom content for the user based on the results of a search or some other request. Also known as "dynamic HTML" or "dynamic content", the "dynamic" term is used when referring to interactive web pages created for each user.</li></ul> If you're looking to scrape data from static web pages, <a href="https://rvest.tidyverse.org/articles/rvest.html" target="_blank" rel="nofollow noopener">rvest</a> is a great tool. <blockquote>Thinking about a career in R and R Shiny? <a href="https://appsilon.com/how-to-start-a-career-as-an-r-shiny-developer/" target="_blank" rel="noopener">Here's everything you need to know to land your first R Developer job</a>.</blockquote> But when it comes to dynamic web pages, `rvest` alone can't get the job done. This is when `<a href="https://docs.ropensci.org/RSelenium/articles/basics.html" target="_blank" rel="nofollow noopener">RSelenium</a>` joins the party. <h2 id="setup">Setup for Webscraping Dynamic Websites in R</h2> <h3>R Packages</h3> Before we dive into the details, make sure you install and load the following packages so you can run the code below:
<pre><code>```{r, message=FALSE, warning=FALSE}
library(dplyr)
library(stringr)
library(purrr)
library(rvest)
library(RSelenium)
```</code></pre>
<h3>Java</h3> It's also important that you have Java installed. To check the installation, type <code>java -version</code> in your Command Prompt. If it throws an error, you don't have Java installed. You can <a href="https://java.com/en/download/" target="_blank" rel="nofollow noopener">download Java here</a>.
<h3 id="selen">Start Selenium</h3> Use <a href="https://www.rdocumentation.org/packages/RSelenium/versions/1.7.7/topics/rsDriver" target="_blank" rel="nofollow noopener">rsDriver()</a> to start a Selenium server and browser. If not specified, <code>browser = "chrome"</code> and <code>version = "latest"</code> are the default values for those parameters.
<pre><code>```{r, eval=FALSE}
rD &lt;- RSelenium::rsDriver() # This might throw an error
```</code></pre>
The code above might throw an error that looks like this: <img class="alignnone size-full wp-image-15476" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d46e9613f3f72cb51e0_selenium_error.webp" alt="rselenium error" width="1435" height="607" /> You can explore <a href="https://stackoverflow.com/questions/55201226/session-not-created-this-version-of-chromedriver-only-supports-chrome-version-7" target="_blank" rel="noopener">this StackOverflow post</a>, which explains what the error is about. Basically, there is a mismatch between the Chrome driver and the Chrome browser versions. <a href="https://stackoverflow.com/questions/55201226/session-not-created-this-version-of-chromedriver-only-supports-chrome-version-7/56173984#56173984" target="_blank" rel="nofollow noopener">The solution</a> is to set the <code>chromever</code> parameter to the latest compatible Chrome driver version. I'll show you how to determine a proper value by manually checking versions. First, we need to identify which Chrome version we have.
You can do that with the following code:
<pre><code>```{r, eval=FALSE}
# Get Chrome version
system2(command = "wmic",
        args = 'datafile where name="C:\\\\Program Files (x86)\\\\Google\\\\Chrome\\\\Application\\\\chrome.exe" get Version /value')
```</code></pre>
If you run that code in the console, you should see a result that looks like this (note: your version may differ from the one shown here): <img class="alignnone size-full wp-image-15470" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d48ee976c4ea9903c56_chrome_version.webp" alt="r for webscraping - chrome version" width="1640" height="128" /> Now we have to list the available Chrome drivers:
<pre><code>```{r, eval=FALSE}
binman::list_versions(appname = "chromedriver")
```</code></pre>
<img class="alignnone size-full wp-image-15468" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d49965eadfe4147707b_chrome_drivers.webp" alt="r for webscraping - chrome drivers" width="1169" height="147" /> Each version of the Chrome driver supports Chrome with matching major, minor, and build version numbers. For example, Chrome driver `73.0.3683.20` supports all Chrome versions that start with `73.0.3683`. In our case, we could use either `103.0.5060.24` or `103.0.5060.53`. If no Chrome driver matches your Chrome version, you'll need to install one. The updated code looks like this:
<pre><code>```{r, eval=FALSE}
# Start Selenium server and browser
rD &lt;- RSelenium::rsDriver(browser = "chrome",
                          chromever = "103.0.5060.24")

# Assign the client to an object
remDr &lt;- rD[["client"]]
```</code></pre>
Running <code>rD &lt;- RSelenium::rsDriver(...)</code> should open a new Chrome window.
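The manual matching above can also be sketched as a small helper that, given your Chrome version string, picks a compatible driver from the ones <code>binman</code> knows about. (This is an illustrative sketch, not part of {RSelenium}; <code>get_compatible_chromever()</code> is a hypothetical name, and it assumes at least one matching driver is already installed locally.)

```r
# Hypothetical helper: match Chrome's major.minor.build prefix against the
# chromedriver versions available locally via binman
get_compatible_chromever <- function(chrome_version) {
  # "103.0.5060.134" -> "103.0.5060" (drop the patch number)
  build <- sub("\\.\\d+$", "", chrome_version)
  drivers <- unlist(binman::list_versions(appname = "chromedriver"))
  matches <- drivers[startsWith(drivers, build)]
  if (length(matches) == 0) stop("No matching chromedriver installed")
  matches[[1]]
}

# e.g. rsDriver(browser = "chrome",
#               chromever = get_compatible_chromever("103.0.5060.134"))
```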
<img class="alignnone size-full wp-image-15480" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d4a87923e0adc170c0a_open_selenium-opti.gif" alt="Open Selenium for r selenium webscraping" width="1889" height="1000" /> You can find more information about <code>rsDriver()</code> in the <a href="https://docs.ropensci.org/RSelenium/articles/basics.html#rsdriver-1" target="_blank" rel="nofollow noopener">Basics vignette</a>. <h2 id="basics">Basics of R and Webscraping</h2> In this section, I'll apply different methods to the <code>remDr</code> object created above. I'm only going to describe the methods that I think are most frequently used. For a complete reference, check the <a href="https://cran.r-project.org/web/packages/RSelenium/RSelenium.pdf" target="_blank" rel="nofollow noopener">package documentation</a>. <ul><li><code>navigate(URL)</code>: Navigate to a given URL</li></ul>
<pre><code>```{r, eval=FALSE}
remDr$navigate("https://www.google.com/")
remDr$navigate("https://www.nytimes.com/")

# Use method without () to get a description of what it does
remDr$navigate
```</code></pre>
<ul><li><code>goBack()</code>: Equivalent to hitting the back button on the browser</li><li><code>goForward()</code>: Equivalent to hitting the forward button on the browser</li></ul>
<pre><code>```{r, eval=FALSE}
remDr$goBack()
remDr$goForward()
```</code></pre>
<ul><li><code>refresh()</code>: Reload the current page</li></ul>
<pre><code>```{r, eval=FALSE}
remDr$refresh()
```</code></pre>
<ul><li><code>getCurrentUrl()</code>: Retrieve the URL of the current page</li></ul>
<pre><code>```{r, eval=FALSE}
remDr$getCurrentUrl()
```</code></pre>
<ul><li><code>maxWindowSize()</code>: Set the size of the browser window to maximum.
By default, the browser window size is small, and some elements of the website you navigate to might not be available right away (I'll talk more about this in the next section).</li></ul>
<pre><code>```{r, eval=FALSE}
remDr$maxWindowSize()
```</code></pre>
<ul><li><code>getPageSource()[[1]]</code>: Get the current page source. This method, combined with `rvest`, is what makes it possible to scrape dynamic web pages. The XML document returned by the method can then be read using <code>rvest::read_html()</code>. The method returns a `list` object, which is the reason for the <code>[[1]]</code>.</li></ul>
<pre><code>```{r, eval=FALSE}
remDr$getPageSource()[[1]]
```</code></pre>
<ul><li><code>open(silent = FALSE)</code>: Send a request to the remote server to instantiate the browser. I use this method when the browser closes for some reason (for example, inactivity). If you have already started the Selenium server, you should run this instead of <code>rD &lt;- RSelenium::rsDriver(...)</code> to re-open the browser.</li></ul>
<pre><code>```{r, eval=FALSE}
remDr$open()
```</code></pre>
<ul><li><code>close()</code>: Close the current session</li></ul>
<pre><code>```{r, eval=FALSE}
remDr$close()
```</code></pre>
<h3>Working with Elements</h3><ul><li><code>findElement(using, value)</code>: Search for an element on the page, starting from the document root. The located element will be returned as an object of the webElement class. To use this function you need some basic knowledge of HTML and CSS (or XPath, etc.). Using a Chrome extension called <a href="https://chrome.google.com/webstore/detail/selectorgadget/mhjhnkcfbdhnjickkkdbjoemdmbfginb?hl=es" target="_blank" rel="nofollow noopener">SelectorGadget</a> might help.</li><li><code>highlightElement()</code>: Utility function to highlight the current element. This helps to check that you selected the intended element.</li><li><code>sendKeysToElement()</code>: Send a sequence of keystrokes to an element.
The keystrokes are sent as a list. Plain text is entered as an unnamed element of the list. Keyboard entries are defined in <code>selKeys</code> and should be listed with the name <code>key</code>.</li><li><code>clearElement()</code>: Clear a TEXTAREA or text INPUT element's value.</li><li><code>clickElement()</code>: Click the element. You can click links, check boxes, dropdown lists, etc.</li></ul> <h4>Example of Working with Elements in R</h4> To understand the following example, basic knowledge of CSS is required.
<pre><code>```{r, eval=FALSE}
# Navigate to Google
remDr$navigate("https://www.google.com/")

# Find search box
webElem &lt;- remDr$findElement(using = "css selector", value = ".gLFyf.gsfi")

# Highlight to check that it was correctly selected
webElem$highlightElement()

# Send search and press enter
# Option 1
webElem$sendKeysToElement(list("the new york times"))
webElem$sendKeysToElement(list(key = "enter"))
# Option 2
webElem$sendKeysToElement(list("the new york times", key = "enter"))

# Go back to Google
remDr$goBack()

# Search something else
webElem$sendKeysToElement(list("financial times"))

# Clear element
webElem$clearElement()

# Search and click
webElem &lt;- remDr$findElement(using = "css selector", value = ".gLFyf.gsfi")
webElem$sendKeysToElement(list("the new york times", key = "enter"))
webElem &lt;- remDr$findElement(using = "css selector", value = ".LC20lb.DKV0Md")
webElem$clickElement()
```</code></pre>
<img class="alignnone size-full wp-image-15482" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d4ba7c6956a1698d9b2_selenium_example-opti.gif" alt="Selenium example of webscraping with rselenium" width="1900" height="1000" /> <h3>Other Methods of Webscraping with R</h3> In this section, I'll list other methods that might be useful to you.
For more information about each, be sure to explore the <a href="https://cran.r-project.org/web/packages/RSelenium/RSelenium.pdf" target="_blank" rel="nofollow noopener">RSelenium documentation</a>.
<pre><code>```{r, eval=FALSE}
remDr$getStatus()
remDr$getTitle()
remDr$screenshot()
remDr$getWindowSize()
remDr$setWindowSize(1000, 800)
remDr$getWindowPosition()
remDr$setWindowPosition(100, 100)
webElem$getElementLocation()
```</code></pre>
<h2 id="example">Example of Webscraping Premier League Player Goals with R</h2> In this example, I'll create a dataset using information stored on the Premier League Player Stats page, which we discussed earlier. First, let's explore the site. <img class="alignnone size-full wp-image-15478" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d4d7b599ca044810496_explore_website-opti.gif" alt="Exploring website before webscraping with R" width="1822" height="972" /> There are a couple of interesting things to point out: <ul><li>When we open the website, we are asked to accept cookies.</li><li>After we accept cookies, an ad opens, which has to be closed.</li><li>As expected, selecting a different season or paginating the table produces no change in the URL.</li></ul> In our code, we'll have to include commands to navigate to the website, accept cookies, and close the ad. Note that the website might change in the future, so some modifications to the following code might become necessary over time. <h3>Target Dataset</h3> Our final dataset will contain the following variables: <ul><li><code>Player</code>: The player's name.</li><li><code>Nationality</code>: The nationality of the player.</li><li><code>Season</code>: The season the stats correspond to.</li><li><code>Position</code>: The player's position in that season.</li><li><code>Goals</code>: The number of goals scored by the player.</li></ul> For simplicity's sake, we'll scrape data from seasons 2017/18 and 2018/19.
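Since overlays like the cookie banner or the ad may not appear on every visit, it can help to wrap the click in a guard that only fires when the element is actually present. Here's a sketch (the helper name is mine, not part of RSelenium; it assumes the <code>remDr</code> client from the Start Selenium section, and relies on <code>findElement()</code> throwing an error when nothing matches):

```r
# Illustrative helper (not part of RSelenium): click an element if present,
# otherwise do nothing. findElement() errors when no element matches,
# so we catch that with tryCatch().
click_if_exists <- function(css) {
  elem <- tryCatch(
    remDr$findElement(using = "css selector", value = css),
    error = function(e) NULL
  )
  if (!is.null(elem)) {
    elem$clickElement()
    return(TRUE)
  }
  FALSE
}

# Usage: click_if_exists(".js-accept-all-close")
```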
<h3>Before We Start</h3> In order to run the code below, we have to start a Selenium server and browser, and create the <code>remDr</code> object. This step was described in the <a href="#selen">Start Selenium</a> section.
<pre><code>```{r, eval=FALSE}
# Start Selenium server and browser
rD &lt;- RSelenium::rsDriver(browser = "chrome",
                          chromever = "103.0.5060.24") # You might have to change this value

# Assign the client to an object
remDr &lt;- rD[["client"]]
```</code></pre>
<h3>First Steps</h3> The code chunk below: <ul><li>Navigates to the website</li><li>Increases the window size (this might be useful to reveal elements that are hidden due to window size)</li><li>Accepts cookies</li><li>Closes the ad</li></ul> You might notice two things: <ul><li>The use of the <code>Sys.sleep()</code> function.</li></ul> This function is used to give the website enough time to load. If the element you want to find isn't loaded when you search for it, an error will be produced. <ul><li>The use of CSS selectors.</li></ul> To select an element using CSS, you can press F12 and inspect the page source (right-clicking the element and selecting <code>Inspect</code> will show you which part of that code refers to the element). Or you can use a Chrome extension called <a href="https://chrome.google.com/webstore/detail/selectorgadget/mhjhnkcfbdhnjickkkdbjoemdmbfginb?hl=es" target="_blank" rel="noopener">SelectorGadget</a>. I recommend learning some HTML and CSS and using these two approaches simultaneously. SelectorGadget helps, but sometimes you will need to inspect the source to get exactly what you want.
<pre><code>```{r, eval=FALSE}
# Navigate to the website
remDr$navigate("https://www.premierleague.com/stats/top/players/goals")

# Give some time to load
Sys.sleep(4)

# Increase window size to find elements
remDr$maxWindowSize()

# Accept cookies
acceptCookies &lt;- remDr$findElement(using = "css selector",
                                   value = ".js-accept-all-close")
acceptCookies$clickElement()

# Give some time to load
Sys.sleep(2)

# Close ad
closeAd &lt;- remDr$findElement(using = "css selector",
                             value = "#advertClose")
closeAd$clickElement()
```</code></pre>
In the next subsection, I'll show how I selected certain elements by inspecting the page source. <h3>Getting Values to Iterate Over</h3> In order to get the data, we will have to iterate over different lists of values. In particular, we need a list of seasons and a list of player positions. We can use `rvest` to scrape the website and get these lists. To do so, we need to find the corresponding nodes. After the code, I'll show where I searched for the required information in the page source for the seasons list. The code below uses `rvest` to create the lists we'll use in the loops.
<pre><code>```{r, eval=FALSE}
# Read page source
source &lt;- remDr$getPageSource()[[1]]

# Get seasons
list_seasons &lt;- read_html(source) %&gt;%
  html_nodes("ul[data-dropdown-list=FOOTBALL_COMPSEASON] &gt; li") %&gt;%
  html_attr("data-option-name") %&gt;%
  .[-1] # Remove "All seasons" option

# To make the example simple
season17 &lt;- which(list_seasons == "2017/18")
season18 &lt;- which(list_seasons == "2018/19")
list_seasons &lt;- list_seasons[c(season17, season18)]

# Get positions
list_positions &lt;- read_html(source) %&gt;%
  html_nodes("ul[data-dropdown-list=Position] &gt; li") %&gt;%
  html_attr("data-option-id") %&gt;%
  .[-1] # Remove "All positions" option
```</code></pre>
<h4>Seasons</h4> This is my view when I open the seasons dropdown list, right-click the 2016/17 season, and inspect it: <img class="alignnone size-full wp-image-15472" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d4d30ca449bf54ab0e2_css_seasons.webp" alt="css seasons for webscraping" width="1651" height="800" /> Taking a closer look at the source where that element is present, we get: <img class="alignnone size-full wp-image-15474" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d4f9506383404c3b83a_css_seasons_zoom.webp" alt="css seasons zoomed in for scraping with r selenium" width="583" height="314" /> As you can see, we have an attribute named 'data-dropdown-list' whose value is 'FOOTBALL_COMPSEASON', and inside we have 'li' tags where the attribute 'data-option-name' changes for each season. This will be useful when defining how to iterate using RSelenium. <h3>Webscraping Loop in R</h3> This is an overview of the loop to get <code>Goals</code> data. <ul><li>Preallocate seasons vector. This list will have a length equal to the number of seasons to be scraped.</li><li>For each season:<ul><li>Click the seasons dropdown list</li><li>Click the corresponding season</li><li>Preallocate positions vector.
This list will have `length = 4` (positions are fixed: GOALKEEPER, DEFENDER, MIDFIELDER, and FORWARD).</li><li>For each position inside the season:<ul><li>Click the position dropdown list</li><li>Click the corresponding position</li><li>Check that there is a table with data (if not, go to the next position)</li><li>Scrape the first table</li><li>While the "Next Page" button exists:<ul><li>Click the "Next Page" button</li><li>Scrape the new table</li><li>Append the new table to the existing table</li></ul></li><li>Go to the top of the website</li></ul></li><li>Rowbind each position table</li><li>Add season data</li></ul></li><li>Rowbind each season table to create the <code>Goals</code> dataset</li></ul> The result of this loop is a `tibble`. This is the code:
<pre><code>```{r, eval=FALSE}
# Preallocate seasons vector
data_seasons &lt;- vector("list", length(list_seasons))

# Note: DDL is short for DropDown List

# Iterate over seasons
for (j in seq_along(list_seasons)){

  # Open seasons dropdown list
  DDLseason &lt;- remDr$findElement(using = "css selector",
                                 value = ".current[data-dropdown-current=FOOTBALL_COMPSEASON]")
  DDLseason$clickElement()
  Sys.sleep(2)

  # Click corresponding season
  ELEMseason &lt;- remDr$findElement(using = "css selector",
                                  value = str_c("ul[data-dropdown-list=FOOTBALL_COMPSEASON] &gt; li[data-option-name='", list_seasons[[j]], "']"))
  ELEMseason$clickElement()
  Sys.sleep(2)

  # Preallocate positions vector
  data_positions &lt;- vector("list", length(list_positions))

  # Iterate over position
  for (k in seq_along(list_positions)){
    # Open positions dropdown list
    DDLposition &lt;- remDr$findElement(using = "css selector",
                                     value = ".current[data-dropdown-current=Position]")
    DDLposition$clickElement()
    Sys.sleep(2)

    # Click corresponding position
    ELEMposition &lt;- remDr$findElement(using = "css selector",
                                      value = str_c("ul[data-dropdown-list=Position] &gt; li[data-option-id='", list_positions[[k]], "']"))
    ELEMposition$clickElement()
    Sys.sleep(2)

    # Check that there is a table to scrape. If there isn't, go to next position
    check_table &lt;- remDr$getPageSource()[[1]] %&gt;%
      read_html() %&gt;%
      html_node(".statsTableContainer") %&gt;%
      html_text()

    if(check_table == "No stats are available for your search") next

    # Populate element of corresponding position (first page)
    data_positions[[k]] &lt;- remDr$getPageSource()[[1]] %&gt;%
      read_html() %&gt;%
      html_table() %&gt;%
      .[[1]] %&gt;%
      # The process was including a column without a name, which we need to remove.
      # To do so, we include the following lines of code.
      as_tibble(.name_repair = "unique") %&gt;%
      select(-ncol(.))

    # Get tables from every page
    btnNextExists &lt;- remDr$getPageSource()[[1]] %&gt;%
      read_html() %&gt;%
      html_node(".paginationNextContainer.inactive") %&gt;%
      html_text() %&gt;%
      is.na()

    # While there is a Next button to click
    while (btnNextExists){
      # Click "Next"
      btnNext &lt;- remDr$findElement(using = "css selector",
                                   value = ".paginationNextContainer")
      btnNext$clickElement()
      Sys.sleep(2)

      # Get table from new page
      table_n &lt;- remDr$getPageSource()[[1]] %&gt;%
        read_html() %&gt;%
        html_table() %&gt;%
        .[[1]] %&gt;%
        # The process was including a column without a name, which we need to remove.
        # To do so, we include the following lines of code.
        as_tibble(.name_repair = "unique") %&gt;%
        select(-ncol(.))

      # Rowbind existing table and new table
      data_positions[[k]] &lt;- bind_rows(data_positions[[k]], table_n)

      # Update check for Next button
      btnNextExists &lt;- remDr$getPageSource()[[1]] %&gt;%
        read_html() %&gt;%
        html_node(".paginationNextContainer.inactive") %&gt;%
        html_text() %&gt;%
        is.na()

      Sys.sleep(1)
    }

    # Data wrangling
    data_positions[[k]] &lt;- data_positions[[k]] %&gt;%
      rename(Goals = Stat) %&gt;%
      mutate(Position = list_positions[[k]])

    # Go to top of the page to select next position
    goTop &lt;- remDr$findElement("css", "body")
    goTop$sendKeysToElement(list(key = "home"))
    Sys.sleep(3)
  }

  # Rowbind positions dataset
  data_positions &lt;- reduce(data_positions, bind_rows)

  # Populate corresponding season
  data_seasons[[j]] &lt;- data_positions %&gt;%
    mutate(Season = list_seasons[[j]])
}

# Rowbind seasons dataset to create goals dataset
data_goals &lt;- reduce(data_seasons, bind_rows)
```</code></pre>
This is how the scraping looks in action: <img class="alignnone size-full wp-image-15485" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01d4faa3d157843aec451_webscraping_website-opti.gif" alt="Webscraping in R with rselenium" width="1908" height="1008" /> <h3>Final Dataset</h3> After some data wrangling, this is how the final dataset looks:
<pre><code>```{r, eval=FALSE}
dataset &lt;- data_goals %&gt;%
  select(-c(Rank, Club)) %&gt;%
  select(Season, Position, Player, Nationality, Goals) %&gt;%
  arrange(Season, Position, Player, Nationality)
```

```{r, echo=FALSE}
dataset &lt;- readRDS("data/dataset.rds")
```

```{r}
dataset %&gt;%
  head %&gt;%
  knitr::kable(format = "html")
```</code></pre>
<h2 id="tips">Tips for Working with RSelenium for Webscraping</h2> In this section, I'll discuss general topics that might help you when working with `RSelenium`.
I'll also cover how to troubleshoot some issues that I've experienced in the past. <h3>Parallel Framework</h3> The framework described here is an approach to working in `parallel` with `RSelenium`. This way you can open multiple browsers at the same time and speed up the scraping. Be careful though, because I have experienced issues such as browsers closing for no apparent reason while working in parallel. First, we load the libraries we need:
<pre><code>```{r, eval=FALSE}
# Load libraries
library(parallel)
```</code></pre>
We then determine the number of cores to use. In this example, I use four cores.
<pre><code>```{r, eval=FALSE}
# Determine cores
# Number of cores in your computer
n_cores &lt;- detectCores()

# It's recommended to always leave at least one core free
# clust &lt;- makeCluster(n_cores - 1)

# I decided to make an example using 4 cores.
clust &lt;- makeCluster(4)
```</code></pre>
List the ports that are going to be used to start Selenium:
<pre><code>```{r, eval=FALSE}
# List ports
ports = list(4567L, 4444L, 4445L, 5555L)
```</code></pre>
We use `clusterApply()` to start Selenium on each core. Pay attention to the use of the <a href="https://mohit2152sharma.github.io/Data-Science-Question-A-Day/questions/23112019_31_R/23112019_31_R.html#:~:text=Answer,the%20variable%20in%20global%20environment" target="_blank" rel="nofollow noopener">superassignment operator</a>. When you run this function, you will see that four Chrome windows are opened.
<pre><code>```{r, eval=FALSE}
# Open Selenium on each core, using one port per core.
clusterApply(clust, ports, function(x){
  # Here you load the libraries on each core
  library(RSelenium)
  library(dplyr) # Not needed for this particular example
  library(rvest) # Not needed for this particular example

  # Pay attention to the use of the superassignment operator.
  rD &lt;&lt;- RSelenium::rsDriver(
    browser = "chrome",
    chromever = "103.0.5060.24",
    port = x
  )

  # Pay attention to the use of the superassignment operator.
  remDr &lt;&lt;- rD[["client"]]
})
```</code></pre>
This is an example of pages that we will open in parallel:
<pre><code>```{r, eval=FALSE}
# List element to iterate with parallel processing
pgs &lt;- list("https://www.google.com",
            "https://www.nytimes.com",
            "https://www.ft.com")
```</code></pre>
Use `parLapply()` to work in parallel. When you run this, you'll see that each browser opens one website, and one remains blank. This is a simple example; I haven't defined any scraping, but we could!
<pre><code>```{r, eval=FALSE}
# Define iteration
parLapply(clust, pgs, function(x) {
  remDr$navigate(x)
})
```</code></pre>
When you're done, stop Selenium on each core and stop the cluster.
<pre><code>```{r, eval=FALSE}
# Define function to stop Selenium on each core
close_rselenium &lt;- function(){
  clusterEvalQ(clust, {
    remDr$close()
    rD$server$stop()
  })
  system("taskkill /im java.exe /f", intern = FALSE, ignore.stdout = FALSE)
}
```

```{r, eval=FALSE}
# Close Selenium on each core
close_rselenium()
# Stop the cluster
stopCluster(clust)
```</code></pre>
<h3>Browser Closing for No Reason</h3> Consider the following scenario: your loop navigates to a certain website, clicks some elements, and then gets the page source to scrape using `rvest`. If the browser closes in the middle of that loop, you will get an error (e.g., it won't navigate to the website, or the element won't be found). You can work around these errors using `tryCatch()`, but even if you skip the iteration where the error occurred, the next iteration will fail again when you try to navigate to the website (because there is no browser open!).
You could, for example, use <code>remDr$open()</code> at the beginning of the loop and `remDr$close()` at the end, but that may open and close too many browsers and make the process slower. So I created this function that handles part of the problem. Even though the iteration where the browser closed will not finish, the next one will, and the process won't stop. It basically tries to get the current URL using <code>remDr$getCurrentUrl()</code>. If no browser is open, this throws an error, and if we get an error, it opens a browser.
<pre><code>```{r, eval=FALSE}
check_chrome &lt;- function(){
  check &lt;- try(suppressMessages(remDr$getCurrentUrl()), silent = TRUE)
  if ("try-error" %in% class(check)) remDr$open(silent = TRUE)
}
```</code></pre>
<h3>Closing Selenium (Port Already in Use)</h3> Sometimes, even if the browser window is closed, when you re-run <code>rD &lt;- RSelenium::rsDriver(...)</code> you might encounter an error like:
<pre><code>```
Error in wdman::selenium(port = port, verbose = verbose, version = version,  :
  Selenium server signals port = 4567 is already in use.
```</code></pre>
This means the connection was not completely closed. You can execute the lines of code below to stop Selenium:
<pre><code>```{r, eval=FALSE}
remDr$close()
rD$server$stop()
system("taskkill /im java.exe /f", intern = FALSE, ignore.stdout = FALSE)
```</code></pre>
You can check out <a href="https://stackoverflow.com/questions/43991498/rselenium-server-signals-port-is-already-in-use" target="_blank" rel="nofollow noopener">this StackOverflow post</a> for more info. <h3>Wrapper Functions</h3> You can create functions in order to type less. Suppose that you navigate to a certain website where you have to click one link that sends you to a site with different tabs.
You can use something like this:
<pre><code>```{r, eval=FALSE}
navigate_page &lt;- function(CSS_ID, CSS_TAB = NULL){
  remDr$navigate("WEBSITE")
  webElem &lt;- remDr$findElement(using = "css selector", CSS_ID)
  webElem$clickElement()
  if (!is.null(CSS_TAB)){
    tab &lt;- remDr$findElement(using = "css selector", CSS_TAB)
    tab$clickElement()
  }
}
```</code></pre>
You can also create functions to find elements, check if an element exists on the <a href="https://en.wikipedia.org/wiki/Document_Object_Model" target="_blank" rel="nofollow noopener">DOM (Document Object Model)</a>, try to click an element if it exists, parse the data table you are interested in, etc. You might find these <a href="https://stackoverflow.com/questions/50310595/data-scraping-in-r" target="_blank" rel="nofollow noopener">StackOverflow examples</a> helpful. <blockquote>Looking to create interactive Markdown documents? Explore <a href="https://appsilon.com/r-quarto-tutorial/" target="_blank" rel="noopener">R Quarto with our tutorial to get you started</a>.</blockquote> <h3>Resources for RSelenium Projects</h3> The following list contains different videos, posts, and StackOverflow posts that I found useful when learning and working with RSelenium. <ul><li>The ultimate online collection toolbox: Combining RSelenium and Rvest [<a href="https://www.youtube.com/watch?v=OxbvFiYxEzI&amp;t" target="_blank" rel="noopener">Part I</a>] and [<a href="https://www.youtube.com/watch?v=JcIeWiljQG4" target="_blank" rel="noopener">Part II</a>].</li></ul> If you know about `rvest` and just want to learn about `RSelenium`, I'd recommend watching Part II. It gives an overview of what you can do when combining `RSelenium` and `rvest`, and it has decent, practical examples. As a final comment regarding these videos, I wouldn't pay too much attention to setting up Docker because you don't need to work that way in order to get `RSelenium` going.
<ul><li>RSelenium Tutorial: A Tutorial to Basic Web Scraping With RSelenium [<a href="http://thatdatatho.com/2019/01/22/tutorial-web-scraping-rselenium/" target="_blank" rel="noopener">Link</a>].</li></ul> I found this post really useful when trying to set up `RSelenium`. The solution given in <a href="https://stackoverflow.com/questions/55201226/session-not-created-this-version-of-chromedriver-only-supports-chrome-version-7/56173984#56173984" target="_blank" rel="noopener">this StackOverflow post</a>, which is mentioned in the article, seems to be enough. <ul><li>Dungeons and Dragons Web Scraping with rvest and RSelenium [<a href="https://lmyint.github.io/post/dnd-scraping-rvest-rselenium/" target="_blank" rel="noopener">Link</a>].</li></ul> This is a great post! It starts with a general tutorial for scraping with `rvest` and then dives into `RSelenium`. If you are not familiar with `rvest`, you should start here. <ul><li>RSelenium Tutorial [<a href="http://joshuamccrain.com/tutorials/web_scraping_R_selenium.html" target="_blank" rel="noopener">Link</a>].</li><li>RSelenium Package Website [<a href="https://docs.ropensci.org/RSelenium/" target="_blank" rel="noopener">Link</a>].</li></ul> It has more advanced and detailed content. I just took a look at the <a href="https://docs.ropensci.org/RSelenium/articles/basics.html" target="_blank" rel="noopener">Basics vignette</a>. 
These StackOverflow posts helped me when working with dropdown lists: <ul><li>Rselenium - How to scrape all drop-down list option values [<a href="https://stackoverflow.com/questions/39949809/rselenium-how-to-scrape-all-drop-down-list-option-values" target="_blank" rel="noopener">Post</a>]</li><li>Dropdown boxes in RSelenium [<a href="https://stackoverflow.com/questions/26963927/dropdown-boxes-in-rselenium" target="_blank" rel="noopener">Post</a>]</li></ul> This post gives a solution to the "port already in use" problem: <ul><li>RSelenium: server signals port is already in use [<a href="https://stackoverflow.com/questions/43991498/rselenium-server-signals-port-is-already-in-use" target="_blank" rel="noopener">Post</a>].</li></ul> Even though it is not marked as "best," the last line of code in the second answer is useful.
