R Packages - RStudio
30/07/2016
knitr is an elegant, flexible and fast dynamic report generation package that combines R with TeX, Markdown, or HTML.
ggplot2 is an enhanced data visualization package for R. Create stunning multi-layered graphics with ease.
lubridate is an R package that makes it easier to work with dates and times. The project link will bring you to a concise tour of some of the things lubridate can do for you.
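As a small illustrative sketch (not from the tour itself), lubridate parses and shifts dates with readable helpers:

```r
library(lubridate)

d <- ymd("2016-07-30")  # parse a year-month-day string into a Date
month(d)                # extract the month: 7
d + months(1)           # shift forward one calendar month: "2016-08-30"
```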
magrittr provides a mechanism for chaining commands with a new forward-pipe operator, %>%.
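For instance, the pipe turns a nested call inside-out into a left-to-right chain (a minimal sketch):

```r
library(magrittr)

# geometric mean of 1, 10, 100, written as a pipeline instead of
# round(exp(mean(log(c(1, 10, 100)))))
c(1, 10, 100) %>% log() %>% mean() %>% exp() %>% round()  # 10
```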
packrat is a dependency management tool for R to make your R projects more isolated, portable, and reproducible.
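A typical packrat workflow, sketched from its documented entry points (the project path is hypothetical):

```r
# initialize a private, project-local package library
packrat::init("~/projects/myproject")

# after installing or upgrading packages, record the exact versions
packrat::snapshot()

# on another machine, restore the recorded library
packrat::restore()
```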
Haven
Leaflet
DT
https://www.rstudio.com/products/rpackages/
1 / 41
roxygen2
testthat
htmlwidgets
shinydashboard

shinydashboard makes it easy to use Shiny to create dashboards.
DT
Please use GitHub issues to file bug reports or feature requests, and use StackOverflow or the shiny-discuss mailing list to ask questions.
1 Usage
The main function in this package is datatable(). It creates an HTML widget to display R data objects with DataTables.
datatable(data, options = list(), class = "display", callback = JS("return table;"), ...)
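A minimal call is enough to render a data frame as an interactive table; the arguments shown above all have defaults:

```r
library(DT)

# render iris as a paginated, sortable, searchable HTML table
datatable(iris)

# the same table with an option passed through to DataTables
datatable(iris, options = list(pageLength = 25))
```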
[An interactive DataTable rendering of the iris data, with "Show 10 entries" and search controls]
2 Arguments
If you are familiar with DataTables already, you may use the options argument to customize the table. See the page Options for details.
Here we explain the rest of the arguments of the datatable() function.
[Another interactive DataTable rendering of the first rows of iris]
2.2 Styling
Currently, DT only supports the Bootstrap style besides the default style. You can use the argument style = 'bootstrap' to enable the Bootstrap style, and adjust the table classes accordingly using Bootstrap table class names, such as table-striped and table-hover. In fact, DT will automatically adjust the class names even if you provide the DataTables class names such as stripe and hover.
DT:::DT2BSClass('display')
## [1] "table table-striped table-hover"
DT:::DT2BSClass(c('compact', 'cell-border'))
## [1] "table table-condensed table-bordered"
Note you can only use one style for all tables on one page. Please see this separate page for examples using the Bootstrap style.
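A sketch of switching one table to the Bootstrap style:

```r
library(DT)

# Bootstrap styling; class names may be given in either Bootstrap or
# DataTables vocabulary (DT converts the latter automatically)
datatable(head(iris), style = 'bootstrap', class = 'table-bordered table-hover')
```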
[Three interactive DataTable examples rendered here in the original page, each with "Show 10 entries" and search controls]
n - 1 as the index of the n-th column in the original data if you do not display row names;
https://rstudio.github.io/DT/
3 / 41
n as the index of the n-th column in the original data if you want to display row names, because the original index is n - 1 in
JavaScript but we added the row names as the first column, and (n - 1) + 1 = n;
It is very important to remember this when using DataTables options.
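For example, to hide the Sepal.Length column via the DataTables columnDefs option (an illustrative sketch): with row names displayed, its JavaScript index is 1, not 0.

```r
library(DT)

# Sepal.Length is column 0 in the data, but column 1 in JavaScript,
# because the row names occupy column 0
datatable(head(iris), options = list(
  columnDefs = list(list(visible = FALSE, targets = 1))
))
```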
[Three interactive DataTable renderings of the first rows of iris, demonstrating the row-names variants]
When you display the row names of the data, the row-names column has a white space as its column name by default, which is why you cannot see its column name. You can certainly choose to use a column name for the row names as well, e.g.
# change the first column name to 'ID'
datatable(head(iris), colnames = c('ID' = 1))
[Rendered DataTable of head(iris) with the row-names column labelled "ID"]
# a custom table container with a two-row header
sketch = htmltools::withTags(table(
  thead(
    tr(
      th(rowspan = 2, 'Species'),
      th(colspan = 2, 'Sepal'),
      th(colspan = 2, 'Petal')
    ),
    tr(
      lapply(rep(c('Length', 'Width'), 2), th)
    )
  )
))
# the header it generates:
# <th rowspan="2">Species</th> <th colspan="2">Sepal</th> <th colspan="2">Petal</th>
# <th>Length</th> <th>Width</th> <th>Length</th> <th>Width</th>

# use rownames = FALSE here because we did not generate a cell for row names in
# the header, and the header only contains five columns
datatable(iris[1:20, c(5, 1:4)], container = sketch, rownames = FALSE)
[Rendered DataTable with the grouped two-row header: Species | Sepal (Length, Width) | Petal (Length, Width), showing 1 to 10 of 20 entries]
You can also add a footer to the table container, and here is an example:
# a custom table with both header and footer
sketch = htmltools::withTags(table(
  tableHeader(iris),
  tableFooter(iris)
))
print(sketch)
<table>
  <thead>
    <tr>
      <th>Sepal.Length</th>
      <th>Sepal.Width</th>
      <th>Petal.Length</th>
      <th>Petal.Width</th>
      ...
    </tr>
  </thead>
  <tfoot>
    <tr>
      <th>Sepal.Length</th>
      <th>Sepal.Width</th>
      <th>Petal.Length</th>
      <th>Petal.Width</th>
      ...
    </tr>
  </tfoot>
</table>
datatable(
  head(iris, 10), container = sketch,
  options = list(pageLength = 5, dom = 'tip'), rownames = FALSE
)
[Rendered DataTable with both a header and a footer, showing 5 of 10 entries]
Table 1: This is a simple caption for the table.

[Rendered DataTable displaying the caption above]
# display the caption at the bottom, and <em> the caption
datatable(
  head(iris),
  caption = htmltools::tags$caption(
    style = 'caption-side: bottom; text-align: center;',
    'Table 2: ', htmltools::em('This is a simple caption for the table.')
  )
)
[Rendered DataTable with the caption "Table 2: This is a simple caption for the table." displayed at the bottom, showing 1 to 6 of 6 entries]
[Rendered DataTable with a filter control ("All") above each column, showing 1 to 5 of 30 entries]
Depending on the type of a column, the filter control can be different. Initially, you see search boxes for all columns. When you click the
search boxes, you may see different controls:
For numeric/date/time columns, range sliders are used to filter rows within ranges;
For factor columns, selectize inputs are used to display all possible categories, and you can select multiple categories there (note
you can also type in the box to search in all categories);
For character columns, ordinary search boxes are used to match the values you typed in the boxes;
When you leave the initial search boxes, the controls will be hidden and the filtering values (if there are any) are stored in the boxes:
For numeric/date/time columns, the values displayed in the boxes are of the form low ... high;
For factor columns, the values are serialized as a JSON array of the form ["value1", "value2", "value3"];
When a column is filtered, there will be a clear button in its search box, and you can click the button to clear the filter. If you do not want to use the controls, you can type in the search boxes directly, e.g. you may type 2 ... 5 to filter a numeric column, and the range of its slider will be automatically adjusted to [2, 5]. If a search box is too narrow to read the values in it, you can mouse over the box and its values will be displayed as a tooltip. See this example for how to hide the clear buttons, and use plain text input styles instead of Bootstrap.
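The filter controls described above are enabled through the filter argument, e.g.:

```r
library(DT)

# put a filter control on top of each column; numeric columns get range
# sliders and the factor column gets a selectize input
datatable(iris, filter = 'top', options = list(pageLength = 5))
```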
Below is a simple example to demonstrate filters for character, date, and time columns:
d = data.frame(
  names = rownames(mtcars),
  date = as.Date('2015-03-23') + 1:32,
  time = as.POSIXct('2015-03-23 12:00:00', tz = 'UTC') + (1:32) * 5000,
  stringsAsFactors = FALSE
)
str(d)
## 'data.frame': 32 obs. of 3 variables:
## $ names: chr "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" ...

[Rendered DataTable of d with filter controls, showing 1 to 5 of 32 entries; times are displayed like 2015-03-23T13:23:20Z]
Filtering in the above examples was done on the client side (using JavaScript in your web browser). Column filters also work in the server-side processing mode, in which case filtering will be processed on the server, and there may be some subtle differences (e.g. JavaScript regular expressions are different from R's). See here for an example of column filters working on the server side.
Known Issues of Column Filters
The position of column filters may be off when scrolling is enabled in the table, e.g. via the options scrollX and/or scrollY. The appearance
may be affected by Shiny sliders, as reported in #49.
[Rendered DataTable whose callback argument is JS('table.page("next").draw(false);'), so the table opens on its second page]
In the above example, the actual callback function on the JavaScript side is this (callback is only the body of the function):
function(table) {
  table.page("next").draw(false);
}
After we initialize the table via the .DataTable() method in DataTables, the DataTables instance is passed to this callback function. Below
are a few more examples:
Show extra information in child rows
Please note that this callback argument is an argument of the datatable() function only; do not confuse it with the callbacks in the DataTables options. The purpose of this argument is to allow users to manipulate the DataTables object after its creation.
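As another minimal sketch, a callback that jumps to the second page once the table is created (page indices are zero-based in DataTables):

```r
library(DT)

datatable(head(iris, 30), callback = JS('table.page(1).draw(false);'))
```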
[Two rendered tables demonstrating the escape argument: one shows the literal HTML as text, e.g. <em>Column 2</em> and <a href="http://rstudio.com">RStudio</a>; the other renders the HTML, showing bold and emphasized text and working links (Bold, RStudio, Emphasize, Hello)]
Besides TRUE and FALSE, you can also specify which columns you want to escape, e.g.
datatable(m, escape = 1)  # escape the first column
datatable(m, escape = 2)  # escape the second column
datatable(m, escape = c(TRUE, FALSE))  # escape the first column
colnames(m) = c('V1', 'V2')
datatable(m, escape = 'V1')
Introduction to roxygen2
Hadley Wickham
2015-11-10
Documentation is one of the most important aspects of good code. Without it, users won't know how to use your package, and are unlikely to do so. Documentation is also useful for you in the future (so you remember what the heck you were thinking!), and for other developers working on your package. The goal of roxygen2 is to make documenting your code as easy as possible. R provides a standard way of documenting packages: you write .Rd files in the man/ directory. These files use a custom syntax, loosely based on LaTeX. Roxygen2 provides a number of advantages over writing .Rd files by hand:
Code and documentation are adjacent, so when you modify your code, it's easy to remember that you need to update the documentation.
Roxygen2 dynamically inspects the objects that it's documenting, so it can automatically add data that you'd otherwise have to write by hand.
It abstracts over the differences in documenting S3 and S4 methods, generics and classes, so you need to learn fewer details.
As well as generating .Rd files, roxygen will also create a NAMESPACE for you, and will manage the Collate field in DESCRIPTION.
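As a minimal sketch (adapted from the vignette's running example), a roxygen2 block is a series of #' comments directly above the object it documents:

```r
#' Add together two numbers.
#'
#' @param x A number.
#' @param y A number.
#' @return The sum of \code{x} and \code{y}.
#' @examples
#' add(1, 1)
#' @export
add <- function(x, y) {
  x + y
}
```

Running roxygen2 over this file generates man/add.Rd and adds export(add) to NAMESPACE.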
This vignette provides a high-level description of roxygen2 and how the three main components work. The
other vignettes provide more detail on the individual components:
Generating .Rd files and text formatting describe how to generate function documentation via .Rd files
Managing your NAMESPACE describes how to generate a NAMESPACE file, how namespacing works in R, and
how you can use Roxygen2 to be specific about what your package needs and supplies.
Controlling collation order describes how roxygen2 controls file loading order if you need to make sure
one file is loaded before another.
Running roxygen
There are three main ways to run roxygen:
roxygen2::roxygenise(), or
devtools::document(), if you're using devtools, or
Ctrl + Shift + D, if you're using RStudio.
As of version 4.0.0, roxygen2 will never overwrite a file it didn't create. It does this by labelling every file it
creates with a comment: Generated by roxygen2 (version): do not edit by hand.
https://cran.r-project.org/web/packages/roxygen2/vignettes/roxygen2.html
hadley / testthat
testthat
Testing your code is normally painful and boring. testthat tries to make testing as fun as possible, so that you get a
visceral satisfaction from writing tests. Testing should be fun, not a drag, so you do it all the time. To make that happen,
testthat:
Provides functions that make it easy to describe what you expect a function to do, including catching errors, warnings and
messages.
Easily integrates in your existing workflow, whether it's informal testing on the command line, building test suites, or using
R CMD check.
Can re-run tests automatically as you change your code or tests.
Displays test progress visually, showing a pass, fail or error for every expectation. If you're using the terminal, it'll even
colour the output.
testthat draws inspiration from the xUnit family of testing packages, as well as from many of the innovative Ruby testing
libraries, like rspec, testy, bacon and cucumber. I have used what I think works for R, and abandoned what doesn't, creating a
testing environment that is philosophically centred in R.
Instructions for using this package can be found in the Testing chapter of R packages.
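A minimal sketch of what a test file looks like:

```r
library(testthat)

test_that("basic math behaves as expected", {
  expect_equal(1 + 1, 2)
  expect_warning(log(-1))  # log of a negative number warns and returns NaN
  expect_error(log("a"))   # non-numeric input is an error
})
```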
https://github.com/hadley/testthat
htmlwidgets for R
Widgets in action
Use widgets at the R console, in R Markdown docs, and in Shiny apps.
Just a line or two of R code can be used to create interactive visualizations. See the
featured widgets in the showcase and browse over 50 available widgets in the gallery.
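For instance, the leaflet widget (assuming the leaflet package is installed) produces an interactive map in a couple of lines:

```r
library(leaflet)

# an interactive map, usable at the console, in R Markdown, or in Shiny
leaflet() %>%
  addTiles() %>%
  setView(lng = -93.65, lat = 42.0285, zoom = 17)
```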
Creating widgets
Learn how to create an R binding for your favorite JavaScript library and enable use of it
in the R console, in R Markdown documents, and in Shiny web applications.
Develop a widget
Copyright 2014 - 2015 Ramnath Vaidyanathan, Kenton Russell, and RStudio, Inc.
http://www.htmlwidgets.org/
Shiny Dashboard
shinydashboard
Get started
http://rstudio.github.io/shinydashboard/index.html
shiny 0.13.2
Version: 0.13.2
Depends: R (≥ 3.0.0), methods
Imports: utils, httpuv (≥ 1.3.3), mime (≥ 0.3), jsonlite (≥ 0.9.16), xtable, digest, htmltools (≥ 0.3), R6 (≥ 2.0)
Suggests: datasets, Cairo (≥ 1.5-5), testthat, knitr (≥ 1.6), markdown, rmarkdown, ggplot2
Published: 2016-03-28
Author: Winston Chang [aut, cre], Joe Cheng [aut], JJ Allaire [aut], Yihui Xie [aut], Jonathan McPherson [aut], RStudio [cph], jQuery Foundation [cph] (jQuery library and jQuery UI library), jQuery contributors [ctb, cph] (jQuery library; authors listed in inst/www/shared/jquery-AUTHORS.txt), jQuery UI contributors [ctb, cph] (jQuery UI library; authors listed in inst/www/shared/jqueryui/1.10.4/AUTHORS.txt), Mark Otto [ctb] (Bootstrap library), Jacob Thornton [ctb] (Bootstrap library), Bootstrap contributors [ctb] (Bootstrap library), Twitter, Inc [cph] (Bootstrap library), Alexander Farkas [ctb, cph] (html5shiv library), Scott Jehl [ctb, cph] (Respond.js library), Stefan Petre [ctb, cph] (Bootstrap-datepicker library), Andrew Rowls [ctb, cph] (Bootstrap-datepicker library), Dave Gandy [ctb, cph] (Font-Awesome font), Brian Reavis [ctb, cph] (selectize.js library), Kristopher Michael Kowal [ctb, cph] (es5-shim library), es5-shim contributors [ctb, cph] (es5-shim library), Denis Ineshin [ctb, cph] (ion.rangeSlider library), Sami Samhuri [ctb, cph] (Javascript strftime library), SpryMedia Limited [ctb, cph] (DataTables library), John Fraser [ctb, cph] (showdown.js library), John Gruber [ctb, cph] (showdown.js library), Ivan Sagalaev [ctb, cph] (highlight.js library), R Core Team [ctb, cph] (tar implementation from R)
Maintainer:
Winston Chang <winston at rstudio.com>
BugReports:
https://github.com/rstudio/shiny/issues
License:
GPL-3 | file LICENSE
URL:
http://shiny.rstudio.com
NeedsCompilation: no
Materials:
README NEWS
In views:
WebTechnologies
CRAN checks:
shiny results
Downloads:
Reference manual:
shiny.pdf
Vignettes:
JavaScript Events in Shiny
Package source:
shiny_0.13.2.tar.gz
Windows binaries:
r-devel: shiny_0.13.2.zip, r-release: shiny_0.13.2.zip, r-oldrel: shiny_0.13.2.zip
OS X Mavericks binaries: r-release: shiny_0.13.2.tgz, r-oldrel: shiny_0.13.2.tgz
Old sources:
shiny archive
Reverse dependencies:
Reverse depends: bde, CLME, ECharts2Shiny, edgar, EmiStatR, EMMAgeo, enviPick, EurosarcBayes, Factoshiny, gmDatabase, gwdegree, ifaTools, meta4diag, mirtCAT, paramGUI, plotSEMM,
quipu, RJafroc, sglr, shinystan, Sofi, sparkTable, statnetWeb, SubVis
Reverse imports: AdaptGauss, addinslist, adegenet, adespatial, AFM, backtestGraphics, BayesianNetwork, BBEST, capm, ChannelAttributionApp, chipPCR, Cite, cosinor, CosmoPhotoz, crawl,
CTTShiny, datacheck, ddpcr, detzrcr, distcomp, diveRsity, dpcR, dropR, DVHmetrics, DynNom, eAnalytics, edgebundleR, eechidna, EffectLiteR, evobiR, explor, flexdashboard, flora,
FreqProf, G2Sd, gazepath, ggExtra, ggraptR, ggThemeAssist, ggvis, HH, igraphinshiny, IMP, IncucyteDRC, interAdapt, irtDemo, IRTShiny, lavaan.shiny, learnstats, lightsout, MAVIS,
merTools, miniUI, mldr, mlr, mplot, mwaved, NNTbiomarker, npregfast, OpenImageR, pairsD3, plotROC, poppr, pqantimalarials, QCAGUI, questionr, refund.shiny, ReporteRs,
rglwidget, rgpui, RLumShiny, rtable, RtutoR, SciencesPo, SensMixed, SHELF, shinyAce, shinybootstrap2, shinyBS, shinydashboard, shinyDND, shinyFiles, ShinyItemAnalysis,
shinyjs, shinyRGL, shinythemes, shinyTime, shinytoastr, shinyTree, signalHsmm, simPATHy, soc.ca, SOMbrero, SpaDES, squid, SSDM, StereoMorph, subspaceMOA, swirlify,
timeseriesdb, timevis, treemap, treescape, trelliscope, VWPre, wppExplorer
Reverse suggests: ahp, archivist, backpipe, beanz, benchmarkme, benchmarkmeData, bigQueryR, bookdown, compareGroups, condvis, covr, d3heatmap, diffr, DT, eemR, embryogrowth, EpiModel,
fanplot, formatR, formattable, geneSLOPE, ggiraph, googleAnalyticsR, googleAuthR, googleVis, idem, ImportExport, koRpus, LDAvis, leaflet, likert, listviewer, metricsgraphics, mirt,
mlxR, nbc4va, phenology, pipe.design, pitchRx, plotly, polmineR, qrage, radarchart, rAmCharts, rangeMapper, repo, RGA, rhandsontable, rivr, rmarkdown, RQuantLib, RxODE,
sadists, SDEFSR, sdm, searchConsoleR, seasonal, shotGroups, synthACS, tabplot, tigerstats, timeline, VineCopula, webshot, weightr
Reverse enhances: dygraphs, htmlwidgets, JMbayes, networkD3, PivotalR, rbokeh, rpivotTable, scatterD3, threejs, wordcloud2
https://cran.r-project.org/web/packages/shiny/index.html
knitr: Elegant, flexible and fast dynamic report generation with R | knitr
knitr
Elegant, flexible and fast
dynamic report generation with R
Overview
The knitr package was designed to be a transparent engine for dynamic report generation with R, solve some long-standing problems in Sweave, and combine features in other add-on packages into one package (knitr ≈ Sweave + cacheSweave + pgfSweave + weaver + animation::saveLatex + R2HTML::RweaveHTML + highlight::HighlightWeaveLatex + 0.2 * brew + 0.1 * SweaveListingUtils + more).
Transparency means that the user has full access to every piece of the input and output, e.g., 1 + 2 produces [1] 3 in an R terminal, and knitr can let the user decide whether to put 1 + 2 between \begin{verbatim} and \end{verbatim}, or <div class="rsource"> and </div>, and put [1] 3 in \begin{Routput} and \end{Routput}; see the hooks page for details
knitr tries to be consistent with users' expectations by running R code as if it were pasted in an R terminal, e.g., qplot(x, y) directly produces the plot (no need to print() it), and all the plots in a code chunk will be written to the output by default
Packages like pgfSweave and cacheSweave have added useful features to Sweave (high-quality tikz graphics and cache), and knitr has simplified the implementations
The design of knitr allows any input languages (e.g. R, Python and awk) and any output markup languages (e.g. LaTeX, HTML, Markdown, AsciiDoc, and reStructuredText)
This package is developed on GitHub; for installation instructions and FAQs, see the README. This website serves as the full documentation of knitr, and you can find the main manual, the graphics manual and other demos / examples here. For a more organized reference, see the knitr book.
Motivation
One of the difficulties with extending Sweave is we have to copy a large amount of code from the utils package (the file SweaveDrivers.R has more than 700 lines of R code), and this is what the two packages mentioned above have done. Once the code is copied, the package authors have to pay close attention to what is changing in the version in official R, apparently an extra burden. The knitr package tried to modularize the whole process of weaving a document into small manageable functions, so it is hopefully easier to maintain and extend (e.g. easy to support HTML output); on the other hand, knitr has many built-in features, and it should not be necessary to hack at the core components of this package. By the way, several FAQs in the Sweave manual are solved in knitr directly.
http://yihui.name/knitr/
Features
The ideas are borrowed from other packages, and some of them are re-implemented in a different way (like cache). A selected list of features include:
faithful output: using evaluate as the backend to evaluate R code, knitr writes everything that you see in an R terminal into the output by default, including printed results, plots and even warnings, messages as well as errors (they should not be ignored in serious computations, especially warnings)
a minor issue is that for grid-based graphics packages like ggplot2 or lattice, users often forget to print() the plot objects, because they can get the output in an R terminal without really print()ing; in knitr, what you get is what you expected
built-in cache: ideas like cacheSweave, but knitr directly uses base R functions to fulfill cache and lazy loading, and another significant difference is that a cached chunk can still have output (in cacheSweave, cached chunks no longer have any output, even if you explicitly print() an object; knitr actually caches the chunk output as well)
formatting R code: the formatR package is used to reformat R code automatically (wrap long lines, add spaces and indent, etc), without sacrificing comments as keep.source=FALSE does
more than 20 graphics devices are directly supported: with dev='CairoPNG' in the chunk options, you can switch to the CairoPNG() device in Cairo in a second; with dev='tikz', the tikz() device in tikzDevice is used; could anything be easier than that? These built-in devices (strictly speaking, wrappers) use inches as units, even for bitmap devices (pixels are converted to inches by the option dpi, which defaults to 72)
even more flexibility on graphics:
width and height of plots in the output document can be additionally specified (the fig.width option is for the graphics device, and out.width is for the output document; think out.width='.8\\textwidth')
locations of plots can be rearranged: they can either appear exactly in the place where they are created, or go to the end of a chunk together (option fig.show='hold')
multiple plots per code chunk are recorded, unless you really want to keep the last plot only (option fig.keep='last')
R code not only can come from code chunks in the input document, but also may come from an external R script, which makes it easier to run the code as you write the document (this will especially benefit LyX)
for power users, further customization is still possible:
the regular expressions to parse R code can be defined, i.e., you do not have to use <<>>= and @ or \Sexpr{}; if you like, you can use any patterns, e.g., %% begin.rcode and %% end.rcode
hooks can be defined to control the output; e.g. you may want to put errors in red bold text, or you may want the source code to be italic, etc; hooks can also be defined to be executed before or after a code chunk, and there are infinite possibilities to extend the power of this package by hooks (e.g. animations, rgl 3D plots, ...)
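Several of the features above are driven by chunk options; a minimal sketch of a chunk in a .Rnw (LaTeX) document:

```r
% a knitr chunk in a .Rnw document; the options pick the tikz device,
% set the figure size, and hold all plots until the end of the chunk
<<scatter, dev='tikz', fig.width=5, fig.height=4, fig.show='hold'>>=
library(ggplot2)
qplot(speed, dist, data = cars)
@
```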
A lot of effort has been made to produce beautiful output and enhance readability by default. For example, code chunks are highlighted and put in a shaded environment in LaTeX with a very light gray background (the framed package), so they can stand out a little bit from other text. The reading experience is hopefully better than the verbatim or Verbatim environments. The leading characters > and + (called prompts) in the output are not added by default (you can bring them back by prompt=TRUE, though). I find them really annoying when I read the output document, because it is very inconvenient to copy and run code that is messed up by these characters.
Acknowledgements
I thank the authors of Sweave, pgfSweave, cacheSweave, brew, decumar, R2HTML, tikzDevice, highlight, digest, evaluate, roxygen2 and of course, R, for the many inspiring ideas and tools. I really appreciate the feedback from many early beta testers. This package was initiated based on the design of decumar.
FOAS
knitr is proudly affiliated with the Foundation for Open Access Statistics (FOAS).
Obviously the package name knitr was coined with weave in mind, and it also aims to be neater. I thank Hadley, Di and Andrew for discussions on this neat name.
If you have any questions, please consider asking them on StackOverflow, where you may get more attention and
fast answers.
ggplot2
ggplot2 is a plotting system for R, based on the grammar of graphics, which tries to take the good parts of base and lattice graphics and none of the bad parts. It takes
care of many of the fiddly details that make plotting a hassle (like drawing legends) as well as providing a powerful model of graphics that makes it easy to produce
complex multi-layered graphics.
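A small sketch of the layered approach: each geom adds a layer to the same aesthetic mapping.

```r
library(ggplot2)

# points plus a per-group linear trend, built up layer by layer
ggplot(mtcars, aes(wt, mpg, colour = factor(cyl))) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)
```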
Documentation
You are welcome to ask ggplot2 questions on R-help, but if you'd like to participate in a more focussed mailing list, please sign up for the ggplot2 mailing list:
You must be a member to post messages, but anyone can read the archived discussions.
Installation
install.packages("ggplot2")
(You'll need to make sure you have the most recent version of R to get the most recent version of ggplot2.)
Books about ggplot2
The R Graphics Cookbook by Winston Chang provides a set of recipes to solve common graphics problems. Read this book if you want to start making standard
graphics with ggplot2 as quickly as possible.
ggplot2: Elegant Graphics for Data Analysis by Hadley Wickham describes the theoretical underpinnings of ggplot2 and shows you how all the pieces fit
together. This book helps you understand the theory that underpins ggplot2, and will help you create new types of graphics specifically tailored to your needs.
You can read sample chapters and download the book code from the book website.
Other resources
http://ggplot2.org/
Tidy data
Hadley Wickham.
Tidy data.
The Journal of Statistical Software, vol. 59, 2014.
Download: pre-print | from publisher
A huge amount of effort is spent cleaning data to get it ready for analysis, but there has been little
research on how to make data cleaning as easy and effective as possible. This paper tackles a small,
but important, component of data cleaning: data tidying. Tidy datasets are easy to manipulate, model
and visualize, and have a specific structure: each variable is a column, each observation is a row, and
each type of observational unit is a table. This framework makes it easy to tidy messy datasets
because only a small set of tools are needed to deal with a wide range of un-tidy datasets. This
structure also makes it easier to develop tidy tools for data analysis, tools that both input and output
tidy datasets. The advantages of a consistent data structure and matching tools are demonstrated
with a case study free from mundane data manipulation chores.
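The structure described above can be illustrated with base R alone (an illustrative sketch; the data are made up):

```r
# messy: one column per year, so 'year' is trapped in the column headers
messy <- data.frame(country = c("A", "B"),
                    y2010 = c(1, 3), y2011 = c(2, 4))

# tidy: one column per variable (country, year, value),
# one row per observation
tidy <- reshape(messy, direction = "long",
                varying = c("y2010", "y2011"), v.names = "value",
                timevar = "year", times = c(2010, 2011),
                idvar = "country")
```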
http://vita.had.co.nz/papers/tidy-data.html
RStudio Blog
readr 0.1.0
I'm pleased to announce that readr is now available on CRAN. Readr makes it easy to
read many types of tabular data:
Delimited files with read_delim(), read_csv(), read_tsv(), and read_csv2().
Fixed width files with read_fwf(), and read_table().
Web log files with read_log().
You can install it by running:
install.packages("readr")
Compared to the equivalent base functions, readr functions are around 10x faster. They're
also easier to use because they're more consistent, they produce data frames that are
easier to use (no more stringsAsFactors = FALSE!), they have a more flexible column
specification, and any parsing problems are recorded in a data frame. Each of these
features is described in more detail below.
Input
All readr functions work the same way. There are four important arguments:
file gives the file to read; a url or local path. A local path can point to a zipped,
bzipped, xzipped, or gzipped file; it'll be automatically uncompressed in memory
before reading. You can also pass in a connection or a raw vector.
For small examples, you can also supply literal data: if file contains a new line, then
the data will be read directly from the string. Thanks to data.table for this great idea!
library(readr)
read_csv("x,y\n1,2\n3,4")
#>   x y
#> 1 1 2
#> 2 3 4
col_names: describes the column names (equivalent to header in base R). It has
three possible values:
TRUE will use the first row of data as column names.
FALSE will number the columns sequentially.
A character vector to use as column names.
col_types: overrides the default column types (equivalent to colClasses in base R).
More on that below.
progress: By default, readr will display a progress bar if the estimated loading time is
greater than 5 seconds. Use progress = FALSE to suppress the progress indicator.
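The arguments above can be combined on literal data (a minimal sketch):

```r
library(readr)

# FALSE numbers the columns; the header row becomes ordinary data
read_csv("x,y\n1,2\n3,4", col_names = FALSE)

# a character vector supplies the names directly
read_csv("1,2\n3,4", col_names = c("a", "b"))
```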
Output
The output has been designed to make your life easier:
Characters are never automatically converted to factors (i.e. no more
stringsAsFactors = FALSE!).
Column names are left as is, not munged into valid R identifiers (i.e. there is no
check.names = TRUE). Use backticks to refer to variables with unusual names, e.g.
df$`Income ($000)`.
The output has class c("tbl_df", "tbl", "data.frame"), so if you also use dplyr you'll get
an enhanced print method (i.e. you'll see just the first ten rows, not the first 10,000!).
Row names are never set.
Column types
Readr heuristically inspects the first 100 rows to guess the type of each column. This is
not perfect, but it's fast and it's a reasonable start. Readr can automatically detect these
column types:
col_logical() [l]: contains only T, F, TRUE or FALSE.
col_integer() [i]: integers.
col_double() [d]: doubles.
col_euro_double() [e]: Euro doubles that use , as the decimal separator.
col_date() [D]: Y-m-d dates.
col_datetime() [T]: ISO8601 date times.
col_character() [c]: everything else.
You can manually specify other column types:
col_skip() [_]: don't import this column.
col_date(format) and col_datetime(format, tz): dates or date times parsed with a given
format string. Dates and times are rather complex, so they're described in more detail
in the next section.
col_numeric() [n]: a sloppy numeric parser that ignores everything apart from 0-9, - and . (this is useful for parsing currency data).
col_factor(levels, ordered): parse a fixed set of known values into an (optionally
ordered) factor.
There are two ways to override the default choices with the col_types argument:
Use a compact string: "dc__d". Each letter corresponds to a column, so this
specification means: read the first column as a double, the second as a character, skip the next
two, and read the last column as a double. (There's no way to use this form with
column types that need parameters.)
With a (named) list of col objects:
read_csv("iris.csv", col_types = list(
  Sepal.Length = col_double(),
  Sepal.Width = col_double(),
  Petal.Length = col_double(),
  Petal.Width = col_double(),
  Species = col_factor(c("setosa", "versicolor", "virginica"))
))
Any omitted columns will be parsed automatically, so the previous call is equivalent
to:
read_csv("iris.csv", col_types = list(
  Species = col_factor(c("setosa", "versicolor", "virginica"))
))
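The compact-string form can be sketched the same way (the file name here is hypothetical):

```r
library(readr)

# "dc__d": read column 1 as double, column 2 as character,
# skip columns 3 and 4, and read column 5 as double.
df <- read_csv("measurements.csv", col_types = "dc__d")
```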
DATES AND TIMES
One of the most helpful features of readr is its ability to import dates and date times. It
can automatically recognise the following formats:
Dates in year-month-day form: 2001-10-20 or 2010/10/15 (or any non-numeric
separator). It can't automatically recognise dates in m/d/y or d/m/y format because
they're ambiguous: is 02/01/2015 the 2nd of January or the 1st of February?
Date times as ISO8601 form: e.g. 2001-02-03 04:05:06.07 -0800, 20010203 040506,
20010203 etc. I don't support every possible variant yet, so please let me know if it
doesn't work for your data (more details in ?parse_datetime).
If your dates are in another format, don't despair. You can use col_date() and
col_datetime() to explicitly specify a format string. Readr implements its own strptime()
equivalent which supports the following format strings:
Year: %Y (4 digits). %y (2 digits); 00-69 -> 2000-2069, 70-99 -> 1970-1999.
Month: %m (2 digits), %b (abbreviated name in current locale), %B (full name in current locale).
Day: %d (2 digits), %e (optional leading space).
Hour: %H.
Minutes: %M.
Seconds: %S (integer seconds), %OS (partial seconds).
Time zone: %Z (as name, e.g. America/Chicago), %z (as offset from UTC, e.g. +0800).
Non-digits: %. skips one non-digit character, %* skips any number of non-digit characters.
Shortcuts: %D = %m/%d/%y, %F = %Y-%m-%d, %R = %H:%M, %T = %H:%M:%S, %x = %y/%m/%d.
https://blog.rstudio.org/2015/04/09/readr-0-1-0/
To practice parsing date times without having to load the file each time, you can use
parse_datetime() and parse_date():
parse_date("2015-10-10")
#> [1] "2015-10-10"
parse_datetime("2015-10-10 15:14")
#> [1] "2015-10-10 15:14:00 UTC"
parse_date("02/01/2015", "%m/%d/%Y")
#> [1] "2015-02-01"
parse_date("02/01/2015", "%d/%m/%Y")
#> [1] "2015-01-02"
Problems
If there are any problems parsing the file, the read_ function will throw a warning telling
you how many problems there are. You can then use the problems() function to access a
data frame that gives information about each problem:
csv <- "x,y\n1,a\nb,2"
df <- read_csv(csv, col_types = "ii")
#> Warning: 2 problems parsing literal data. See problems(...) for more
#> details.
problems(df)
#>   row col   expected actual
#> 1   1   2 an integer      a
#> 2   2   1 an integer      b
Helper functions
Readr also provides a handful of other useful functions:
read_lines() works the same way as readLines(), but is a lot faster.
read_file() reads a complete file into a string.
type_convert() attempts to coerce all character columns to their appropriate type. This
is useful if you need to do some manual munging (e.g. with regular expressions) to
turn strings into numbers. It uses the same rules as the read_* functions.
write_csv() writes a data frame out to a csv file. It's quite a bit faster than write.csv()
and it never writes row.names. It also escapes " embedded in strings in a way that
read_csv() can read.
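A quick sketch of how these helpers fit together (the file names here are hypothetical):

```r
library(readr)

# Read raw lines or a whole file when you need to clean things up manually.
lines <- read_lines("messy.txt")   # like readLines(), but faster
text  <- read_file("notes.txt")    # the complete file as one string

# After manual munging, let readr guess the column types.
df <- data.frame(x = c("1", "2", "3"), stringsAsFactors = FALSE)
df <- type_convert(df)             # "x" becomes an integer column

# Write the result back out: no row names, embedded quotes escaped.
write_csv(df, "clean.csv")
```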
Development
Readr is still under very active development. If you have problems loading a dataset,
please try the development version, and if that doesn't work, file an issue.
52 comments
myschizobuddy
package readr is available as a source package but not as a binary
hadleywickham
You might need to check that you're using a good mirror.
myschizobuddy
for tmy3 dataset in csv, the first row has different number of columns and denotes
location. Second row is the column names and third and above are the values. How
can I read the first row separate from the rest of the file. Then read the rest of the file
from second row.
Here is an example file
http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2005/data/tmy3/722287TYA.CSV
myschizobuddy
Found read_lines(). thanks anyway
hadleywickham
I'd do `nskip = 2` and supply the column names with `col_names`
Waldir Leoncior
Nice, can't wait to try it at work tomorrow! By the way, you can try to guess dates in
m/d/y and d/m/y format by looking for numbers greater than 12. If you find any in the
first position, you've got d/m/y.
hadleywickham
I think that strategy is too risky, and it's not so hard to specify the date format
that you're actually using (or switch to something unambiguous)
Alice
Great work! Thanks Hadley.
I just tried your package. Impressive.
Atish Munje
Just curious: how does it compare to fread from the data.table package, in terms of
speed and handling of large files?
myschizobuddy
this comparison is available on the github page of this project.
hadleywickham
There are some notes in the README:
https://github.com/hadley/readr#compared-to-fread
Mark
I just tested it on a 1.4 GB file with 10 columns and 14.6 million rows. fread was
8.35 seconds, read_csv was 21.9 seconds. However, I had a date field that
needed to be converted from character (fread does not handle dates). That was
added (within data.table) and the whole process was then 23.85 seconds, which
was then slightly slower than read_csv. So, it depends: if you have to read in
dates or other types not handled by fread, then read_csv could be faster. But
fread is faster in terms of raw "get it into R" speed.
sebschub
Hadley, could you please stop being so amazing. I feel so unproductive!
hadleywickham
Haha, no can do
mkborregaard
Why is this launched as a stand-alone package rather than displacing the functions of
base R?
Michael Sumner
Because that's how R works. Imagine if there was a new young upstart every 30
years or so, it would be chaos
Albert Gifi
Providing patches to the R sources is also how R works (e.g. to speed up
the relevant functions).
hadleywickham
Because replacing the functions in base R would break the many many many
existing uses.
dmenne
A factor of almost 100 (52 seconds/0.6 seconds) over read.table for a real output file
from NONMEM, a program from pharmacology research. fread failed to read it because
of nasty column names like OMEGA(11,3). Great cleanup of the standard toolkit.
Arun
@dmenne, I'd first suggest trying it with `fread` from 1.9.5 (on github). Several
improvements and fixes were made to fread to make it more robust.
And if it still doesn't solve the issue, it'd be much more helpful if you could file an issue
on our project page so that it could be fixed.
Thanks,
Arun.
dmenne
Will do. I had reported the problem already a few years ago (not on GitHub).
On testing, I also found that read_table is fast, but does not do exactly what
read.table correctly does
Robert Young
the handling of problems is much like DB2/LUW's data LOAD command, which writes
bad rows to a holding table for further review. very useful
KJ
I have a file where the column names use dots instead of spaces (i.e. Name.Last,
Name.First). When I use the base read.csv() function, it preserves the dot in the
column name. Using read_csv() or read_tsv(), it replaces the dot with a space, making
it unusable without extra work to fix. Can readr preserve the dot or automatically
change the column name to a usable version of the name?
hadleywickham
Could you please file a reproducible bug report at
https://github.com/hadley/readr/issues ?
KJ
I'm sorry. I got it backwards. The original csv file has spaces in the column
names. The base function read.csv() puts periods in place of spaces but
read_csv() leaves the column name as-is. Can readr provide the
convenience of removing or replacing spaces during import?
hadleywickham
You can either use `make.names()` yourself, or use backticks to select the
weird names (see the example above)
KJ
Didn't know about make.names(). Perfect! Thanks!
Haruhiko Okumura
Base functions support file encodings, such as read.csv(..., fileEncoding = "SJIS").
We'd be grateful if readr functions could support at least SJIS or its superset CP932
(MS Code Page 932, used by Windows in our locale).
Why are we still using CP932? Because Excel (even 2013) in our locale can only read
CP932 csv files.
hadleywickham
Encoding support is planned for the next version.
Satish Rajan
.. you are awesome hadley thanks so much for this ..
Mark
Do you anticipate functionality that will allow one to read in the file in chunks of rows
(for batch processing of a large file), or to read in a subset of rows? Or is that already
there?
Mark
would skip and n_max allow that?
hadleywickham
Its planned.
okumuralab
Feature request: comment.char = #
anspiess
Wrt Mark's comment, I believe that the problem with all functions that have
nlines/skip functionality (i.e. read.table, fread, readLines etc) is that they have to read
in n chunks silently into memory before making the n+1 chunk available. This is why,
on my 8GB RAM machine, I fail to load 1GB chunks that are at the end of some 30GB
fastq RNA sequencing file. Does anybody have a solution for that (Hadley?)?
Cheers,
Andrej
Mark
until readr supports it, there is a LaF package which is supposed to do this. It is
fast, but has some idiosyncrasies. Might be worth a look until this is in readr.
ajdamico
thank you
A must for everybody using R!
New packages to read in data |
Moritz S. Schmid
[…] the RStudio team (who brought us dplyr and ggplot2!) released two new packages
for reading in data: the readr package for reading text data, and the readxl package for
reading Excel into R. In tests they outperformed […]
Michael
I'm not so sure about your performance claims:
> system.time(OCC1 system.time(OCC2 <- read.csv("OCCURRENCE.csv",
stringsAsFactors=FALSE))
user system elapsed
1.21 0.03 1.26
(Note that in this case the progress bar appeared when the progress reached 100%;
this is probably a bug.)
Michael
Hmm, my pasted code did not appear properly. Maybe the less-than sign in the
assignment operator was interpreted as HTML??? Ill just paste the output:
readr:
user system elapsed
0.28 0.03 7.47
base:
user system elapsed
1.21 0.03 1.26
hadleywickham
Could you please file a reproducible example on
https://github.com/hadley/readr/issues? You must've hit some strange corner
case. (Also notice that you need to benchmark each function a couple of
times; often the first load is slower than the others because of OS caching)
Michael
Hi Hadley: I can't reproduce the issue now. The readr code is outperforming
the base code. I haven't made any changes to my computer so I'm puzzled.
I'll keep an eye on it and raise an issue if I can reproduce it.
junchenfeng
I am trying to load 2015-03-23T20:09:37 with read_delim; it returns NA with the
warning message of:
Error in withCallingHandlers(expr, warning = function(w)
invokeRestart(muffleWarning)) :
argument x is missing, with no default
hadleywickham
Can you please file a reproducible example on github?
junchenfeng
Done: Error reading time string #144
Thierry Gosselin
How can you forget lines that start with # while importing in R with readr?
hadleywickham
You can't currently. I'll probably add that feature in the next release.
Thierry Gosselin
Great! Lots of files in genomics start with commented lines, and using skip
during import requires more steps (looking at the files).
Will be a very useful feature!
date
ymd("20110604")
## [1] "2011-06-04"
mdy("06-04-2011")
## [1] "2011-06-04"
dmy("04/06/2011")
## [1] "2011-06-04"
Lubridate's parse functions handle a wide variety of formats and separators, which simplifies the parsing process.
If your date includes time information, add h, m, and/or s to the name of the function. ymd_hms is probably the most common date time
format. To read the dates in with a certain time zone, supply the official name of that time zone in the tz argument.
arrive <- ymd_hms("2011-06-04 12:00:00", tz = "Pacific/Auckland")
arrive
## [1] "2011-06-04 12:00:00 NZST"
leave <- ymd_hms("2011-08-10 14:00:00", tz = "Pacific/Auckland")
leave
## [1] "2011-08-10 14:00:00 NZST"
Time Zones
There are two very useful things to do with dates and time zones. First, display the same moment in a different time zone. Second, create
a new moment by combining an existing clock time with a new time zone. These are accomplished by with_tz and force_tz.
For example, a while ago I was in Auckland, New Zealand. I arranged to meet the co-author of lubridate, Hadley, over skype at 9:00 in the
morning Auckland time. What time was that for Hadley who was back in Houston, TX?
meeting <- ymd_hms("2011-07-01 09:00:00", tz = "Pacific/Auckland")
with_tz(meeting, "America/Chicago")
## [1] "2011-06-30 16:00:00 CDT"
So the meeting occurred at 4:00 Hadley's time (and the day before, no less). Of course, this was the same actual moment of time as 9:00
in New Zealand. It just appears to be a different day due to the curvature of the Earth.
What if Hadley made a mistake and signed on at 9:00 his time? What time would that have been for me?
mistake <- force_tz(meeting, "America/Chicago")
with_tz(mistake, "Pacific/Auckland")
## [1] "2011-07-02 02:00:00 NZST"
His call would arrive at 2:00 am my time! Luckily he never did that.
Time Intervals
You can save an interval of time as an Interval class object with lubridate. This is quite useful! For example, my stay in Auckland lasted
from June 4, 2011 to August 10, 2011 (which we've already saved as arrive and leave). We can create this interval in one of two ways:
auckland <- interval(arrive, leave)
auckland
auckland <- arrive %--% leave
auckland
https://cran.r-project.org/web/packages/lubridate/vignettes/lubridate.html
My mentor at the University of Auckland, Chris, traveled to various conferences that year including the Joint Statistical Meetings (JSM).
This took him out of the country from July 20 until the end of August.
jsm <- interval(ymd(20110720, tz = "Pacific/Auckland"),
                ymd(20110831, tz = "Pacific/Auckland"))
jsm
## [1] 2011-07-20 NZST--2011-08-31 NZST
Then I better make hay while the sun shines! For what part of my visit will Chris be there?
setdiff(auckland, jsm)
## [1] 2011-06-04 12:00:00 NZST--2011-07-20 NZST
Other functions that work with intervals include int_start, int_end, int_flip, int_shift, int_aligns, union, intersect, setdiff,
and %within%.
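A brief sketch of a few of these helpers, continuing the arrive/leave example above:

```r
library(lubridate)

arrive <- ymd_hms("2011-06-04 12:00:00", tz = "Pacific/Auckland")
leave  <- ymd_hms("2011-08-10 14:00:00", tz = "Pacific/Auckland")
auckland <- interval(arrive, leave)

int_start(auckland)            # the arrival instant
int_end(auckland)              # the departure instant
int_flip(auckland)             # the same span, running backwards
int_shift(auckland, days(1))   # the whole interval moved one day later
```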
Why two classes? Because the timeline is not as reliable as the number line. The Duration class will always supply mathematically
precise results. A duration year will always equal 365 days. Periods, on the other hand, fluctuate the same way the timeline does to give
intuitive results. This makes them useful for modeling clock times. For example, durations will be honest in the face of a leap year, but
periods may return what you want:
leap_year(2011) ## regular year
## [1] FALSE
ymd(20110101) + dyears(1)
## [1] "2012-01-01"
ymd(20110101) + years(1)
## [1] "2012-01-01"
leap_year(2012) ## leap year
## [1] TRUE
ymd(20120101) + dyears(1)
## [1] "2012-12-31"
ymd(20120101) + years(1)
## [1] "2013-01-01"
You can use periods and durations to do basic arithmetic with date times. For example, if I wanted to set up a recurring weekly Skype
meeting with Hadley, it would occur on:
meetings <- meeting + weeks(0:5)
Hadley travelled to conferences at the same time as Chris. Which of these meetings would be affected? The last three.
meetings %within% jsm
## [1] FALSE FALSE FALSE TRUE TRUE TRUE
And so on. Alternatively, we can do modulo and integer division. Sometimes this is more sensible than division - it is not obvious how to
express a remainder as a fraction of a month because the length of a month constantly changes.
auckland %/% months(1)
## [1] 2
auckland %% months(1)
## [1] 2011-08-04 12:00:00 NZST--2011-08-10 14:00:00 NZST
Modulo with a timespan returns the remainder as a new (smaller) interval. You can turn this or any interval into a generalized time span
with as.period.
as.period(auckland %% months(1))
## [1] "6d 2H 0M 0S"
as.period(auckland)
## [1] "2m 6d 2H 0M 0S"
[Truncated output in the original: examples showing that adding months to a date like "2013-01-31" yields NA whenever the resulting month has no matching day, alongside alternatives that return the first or last day of each month.]
Notice that this will only affect arithmetic with months (and arithmetic with years if your start date is Feb 29).
Vectorization
The code in lubridate is vectorized and ready to be used in both interactive settings and within functions. As an example, I offer a function
for advancing a date to the last day of the month:
last_day <- function(date) {
  ceiling_date(date, "month") - days(1)
}
Further Resources
To learn more about lubridate, including the specifics of periods and durations, please read the original lubridate paper. Questions about
lubridate can be addressed to the lubridate Google group. Bugs and feature requests should be submitted to the lubridate development
page on GitHub.
hadley / devtools
devtools
The aim of devtools is to make package development easier by providing R functions that simplify common tasks.
An R package is actually quite simple. A package is a template or set of conventions that structures your code. This not only
makes sharing code easy, it reduces the time and effort required to complete your project: following a template removes the
need to think about how to organize things and paves the way for the creation of standardised tools that can further
accelerate your progress.
While package development in R can feel intimidating, devtools does everything it can to make it less so. In fact,
devtools comes with a small guarantee: if you get an angry e-mail from an R-core member because of a bug in devtools,
forward me the email and your address and I'll mail you a card with a handwritten apology.
devtools is opinionated about package development. It requires that you use roxygen2 for documentation and
testthat for testing. Not everyone would agree with this approach, and they are by no means perfect. But they have evolved
https://github.com/hadley/devtools
devtools::install_github("hadley/devtools")
Windows:
library(devtools)
build_github_devtools()

#### Restart R before continuing ####

install.packages("devtools.zip", repos = NULL, type = "source")
of R CMD check.
check_man() runs most of the documentation checking components of R CMD check.
release() makes sure everything is ok with your package (including asking you a number of questions), then builds
and uploads to CRAN. It also drafts an email to let the CRAN maintainers know that you've uploaded a new package.
Other tips
I recommend adding the following code to your .Rprofile:
.First <- function() {
  options(
    browserNLdisabled = TRUE
    # (further options truncated in the original)
  )
}
Code of conduct
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide
by its terms.
Reference manual:
magrittr.pdf
Vignettes:
Introducing magrittr
Package source:
magrittr_1.5.tar.gz
Windows binaries:
r-devel: magrittr_1.5.zip, r-release: magrittr_1.5.zip, r-oldrel: magrittr_1.5.zip
OS X Mavericks binaries: r-release: magrittr_1.5.tgz, r-oldrel: magrittr_1.5.tgz
Old sources:
magrittr archive
Reverse dependencies:
Reverse depends: efreadr, forestplot, gitlabr, gwdegree, imager, jug, multiplyr, packagetrackr, sp500SlidingWindow
Reverse imports: analogsea, aoos, archivist, ARTool, bkmr, blscrapeR, bpa, carpenter, ckanr, corrr, curlconverter, cystiSim, datacheckr, datadr, datastepr, ddpcr, dendextend, dlstats, dplyr, dpmr, DT,
dygraphs, easyformatr, ecoengine, emil, engsoccerdata, eyelinker, FeatureHashing, fulltext, genderizeR, geojsonio, gglogo, ggvis, gistr, gmailr, Gmisc, googleAnalyticsR, Greg,
gsheet, heatmaply, highcharter, hpoPlot, htmlTable, httping, HydeNet, icd, igraph, IncucyteDRC, intubate, IsingSampler, jqr, latex2exp, lawn, lazysql, leaflet, lexRankr, lightsout,
linear.tools, livechatR, loopr, MakefileR, manhattanly, manifestoR, mason, metricsgraphics, modellingTools, Momocs, mtconnectR, multipanelfigure, networkD3, nhanesA,
NNTbiomarker, ontologyPlot, optigrab, pixiedust, plotly, poppr, prettyunits, purrr, rangeMapper, rattle, rBayesianOptimization, rbokeh, rdrop2, request, RevEcoR, rex, rgbif, rgho,
rglwidget, rhandsontable, RmarineHeatWaves, rpdo, rprev, rscorecard, rslp, rvest, saeRobust, scrubr, searchable, simmer, simulator, sjPlot, spark, srvyr, ss3sim, stplanr, stringr,
subspaceMOA, survminer, taber, tableHTML, testthat, text2vec, tidyr, tigris, useful, vcfR, vegalite, vembedr, visNetwork, webshot, wellknown, wfindr, wordbankr, xgboost
Reverse suggests: assertr, backpipe, brr, checkmate, curl, DiagrammeR, eemR, ensurer, evolqg, formula.tools, icd9, lettercase, LW1949, mosaic, mpoly, ngramrr, operator.tools, palettetown, pomp,
rAmCharts, ReporteRs, rio, rmapshaper, rmetasim, setter, SocialMediaLab, soql, spAddins, tidyjson, wikipediatrend, xml2
Reverse enhances: cowsay
https://cran.r-project.org/web/packages/magrittr/index.html
Packrat
View the Project on GitHub rstudio/packrat
Isolated: Installing a new or updated package for one project won't break your other projects, and vice versa. That's because packrat gives each project its own private package library.
Portable: Easily transport your projects from one computer to another, even across different platforms. Packrat makes it easy to install the packages your project depends on.
Reproducible: Packrat records the exact package versions you depend on, and ensures those exact versions are the ones that get installed wherever you go.
Basic concepts
If you're like the vast majority of R users, when you start working on a new R project you create a new directory for all of your R scripts and data files.
Packrat enhances your project directory by storing your package dependencies inside it, rather than relying on your personal R library that is shared across all of your other R sessions. We call this directory your private package library (or just private library). When you start an R session in a packrat project directory, R will only look for packages in your private library; and anytime you install or remove a package, those changes will be made to your private library.
Unfortunately, private libraries don't travel well; like all R libraries, their contents are compiled for your specific machine architecture, operating system, and R version. Packrat lets you snapshot the state of your private library, which saves to your project directory whatever information packrat needs to be able to recreate that same private library on another machine. The process of installing packages to a private library from a snapshot is called restoring.
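The snapshot/restore workflow described above can be sketched with packrat's main functions:

```r
# From the root of your project directory:
packrat::init()       # create the private library and take a first snapshot

install.packages("stringr")   # installs into the private library only
packrat::snapshot()   # record the exact package versions now in use

# On another machine (or after a fresh checkout of the project):
packrat::restore()    # reinstall the exact snapshotted versions
packrat::status()     # check for packages out of sync with the snapshot
```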
Installing packrat
Packrat is now available on CRAN, so you can install it with:
> install.packages("packrat")
If you like to live on the bleeding edge, you can also install the development version of Packrat with:
> install.packages("devtools")
> devtools::install_github("rstudio/packrat")
You'll also need to make sure your machine is able to build packages from source. See Package Development Prerequisites for the tools needed for your operating system.
Next steps
We highly recommend following our walkthrough guide.
Then check out some of the most common commands.
If you're using RStudio, read the guide to using Packrat with RStudio.
We also have a short list of limitations and caveats you should be aware of.
If you want to set up your own, local, custom CRAN-like repository, you can read Setting Up a Custom CRAN-like Repository.
Need help?
Drop by packrat-discuss and let us know if you have any questions or comments.
© 2013-2014 RStudio, Inc.
http://rstudio.github.io/packrat/
Introduction to stringr
2015-04-29
Strings are not glamorous, high-profile components of R, but they do play a big role in many data cleaning and
preparation tasks. R provides a solid set of string operations, but because they have grown organically over
time, they can be inconsistent and a little hard to learn. Additionally, they lag behind the string operations in
other programming languages, so that some things that are easy to do in languages like Ruby or Python are
rather hard to do in R. The stringr package aims to remedy these problems by providing a clean, modern
interface to common string operations.
More concretely, stringr:
Simplifies string operations by eliminating options that you don't need 95% of the time (the other 5% of
the time you can use functions from base R or stringi).
Uses consistent function names and arguments.
Produces outputs that can easily be used as inputs. This includes ensuring that missing inputs result in
missing outputs, and zero length inputs result in zero length outputs. It also processes factors and
character vectors in the same way.
Completes Rs string handling functions with useful functions from other programming languages.
To meet these goals, stringr provides two basic families of functions:
basic string operations, and
pattern matching functions which use regular expressions to detect, locate, match, replace, extract, and
split strings.
As of version 1.0, stringr is a thin wrapper around stringi, which implements all the functions in stringr with
efficient C code based on the ICU library. Compared to stringi, stringr is considerably simpler: it provides fewer
options and fewer functions. This is great when you're getting started learning string functions, and if you do
need more of stringi's power, you should find the interface similar.
These are described in more detail in the following sections.
and otherwise expands each argument to match the longest. It also accepts negative positions, which are
counted backwards from the last character. The end position defaults to -1, which corresponds to the
last character.
str_sub<- is equivalent to substr<-, but like str_sub() it understands negative indices, and replacement
strings do not need to be the same length as the string they are replacing.
Three functions add new functionality:
str_dup() to duplicate the characters within a string.
str_trim() to remove leading and trailing whitespace.
str_pad() to pad a string with extra whitespace on the left, right, or both sides.
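A minimal sketch of these three functions:

```r
library(stringr)

str_dup("ab", 3)                # "ababab": repeat the string three times
str_trim("  tidy  ")            # "tidy": leading/trailing whitespace removed
str_pad("5", 3, pad = "0")      # "005": pads on the left by default
str_pad("5", 3, side = "both")  # " 5 ": whitespace added on both sides
```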
Pattern matching
stringr provides pattern matching functions to detect, locate, extract, match, replace, and split strings. I'll
illustrate how they work with some strings and a regular expression designed to match (US) phone numbers:
strings <- c(
  "apple",
  "219 733 8965",
  "329-293-8753",
  "Work: 579-499-7527; Home: 543.355.3679"
)
phone <- "([2-9][0-9]{2})[- .]([0-9]{3})[- .]([0-9]{4})"
str_detect() detects the presence or absence of a pattern and returns a logical vector (similar to
grepl()). str_subset() returns the elements of a character vector that match a regular expression
(similar to grep() with value = TRUE).
# Which strings contain phone numbers?
str_detect(strings, phone)
#> [1] FALSE  TRUE  TRUE  TRUE
str_locate() locates the first position of a pattern and returns a numeric matrix with columns start and
end. str_locate_all() locates all matches, returning a list of numeric matrices. Similar to regexpr() and
gregexpr().
# Where in the string is the phone number located?
(loc <- str_locate(strings, phone))
#>      start end
#> [1,]    NA  NA
#> [2,]     1  12
#> [3,]     1  12
#> [4,]     7  18
str_extract() extracts text corresponding to the first match, returning a character vector.
str_extract_all() extracts all matches and returns a list of character vectors.
# What are the phone numbers?
str_extract(strings, phone)
#> [1] NA             "219 733 8965" "329-293-8753" "579-499-7527"
str_extract_all(strings, phone)
#> [[1]]
#> character(0)
#>
#> [[2]]
#> [1] "219 733 8965"
str_match() extracts capture groups formed by () from the first match. It returns a character matrix with
one column for the complete match and one column for each group. str_match_all() extracts capture
groups from all matches and returns a list of character matrices. Similar to regmatches().
# Pull out the three components of the match
str_match(strings, phone)
#>      [,1]           [,2]  [,3]  [,4]
#> [1,] NA             NA    NA    NA
#> [2,] "219 733 8965" "219" "733" "8965"
#> [3,] "329-293-8753" "329" "293" "8753"
str_replace() replaces the first matched pattern and returns a character vector. str_replace_all()
replaces all matches. Similar to sub() and gsub().
str_replace(strings, phone, "XXX-XXX-XXXX")
#> [1] "apple"        "XXX-XXX-XXXX" "XXX-XXX-XXXX" "Work: XXX-XXX-XXXX; Home: 543.355.3679"
str_split_fixed() splits the string into a fixed number of pieces based on a pattern and returns a
character matrix. str_split() splits a string into a variable number of pieces and returns a list of
character vectors.
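For example, splitting an illustrative dash-separated string (my example, not the vignette's):

```r
library(stringr)

x <- "329-293-8753"

# Variable number of pieces, returned as a list of character vectors
str_split(x, "-")
# [[1]] contains "329" "293" "8753"

# Fixed number of pieces, returned as a character matrix;
# the final piece absorbs whatever remains
str_split_fixed(x, "-", n = 2)
# 1 x 2 matrix: "329" and "293-8753"
```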
Arguments
Each pattern matching function has the same first two arguments, a character vector of strings to process and
a single pattern (regular expression) to match. The replace functions have an additional argument specifying
the replacement string, and the split functions have an argument to specify the number of pieces.
Unlike base string functions, stringr offers control over matching not through arguments, but through modifier
functions: regex(), coll(), and fixed(). This is a deliberate choice made to simplify these functions. For
example, while grepl() has six arguments, str_detect() only has two.
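For example, fixed() treats its pattern as a literal string rather than a regular expression, and regex() exposes matching options such as case insensitivity (the example strings here are mine):

```r
library(stringr)

str_detect("1.5", fixed("."))  # TRUE: matches the literal dot
str_detect("15", fixed("."))   # FALSE: no literal dot present
str_detect("15", ".")          # TRUE: "." is a regex wildcard

str_detect("APPLE", regex("apple", ignore_case = TRUE))  # TRUE
```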
Regular expressions
To be able to use these functions effectively, you'll need a good knowledge of regular expressions, which this
vignette is not going to teach you. Some useful tools to get you started:
A good reference sheet.
A tool that allows you to interactively test what a regular expression will match.
A tool to build a regular expression from an input string.
When writing regular expressions, I strongly recommend generating a list of positive (pattern should match) and
negative (pattern shouldn't match) test cases to ensure that you are matching the correct components.
Another approach is to use the second form of str_replace_all(): if you give it a named vector, it applies each
pattern = replacement in turn:
matches <- col2hex(colors())
names(matches) <- str_c("\\b", colors(), "\\b")
str_replace_all(strings, matches)
#> [1] "Roses are #FF0000, violets are #0000FF"
#> [2] "My favourite colour is #00FF00"
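A self-contained sketch of the same idea, with a made-up lookup table:

```r
library(stringr)

# Hypothetical pattern = replacement pairs; names are patterns,
# values are replacements
lookup <- c("cat" = "feline", "dog" = "canine")
str_replace_all("a cat and a dog", lookup)
#> [1] "a feline and a canine"
```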
Conclusion
stringr provides an opinionated interface to strings in R. It makes string processing simpler by removing
uncommon options, and by vigorously enforcing consistency across functions. I have also added new functions
that I have found useful from Ruby, and over time, I hope users will suggest useful functions from other
programming languages. I will continue to build on the included test suite to ensure that the package behaves
as expected and remains bug free.
https://cran.r-project.org/web/packages/stringr/vignettes/stringr.html
dplyr
dplyr is the next iteration of plyr, focussed on tools for working with data frames (hence the d in the name). It has three main
goals:
Identify the most important data manipulation tools needed for data analysis and make them easy to use from R.
Provide blazing fast performance for in-memory data by writing key pieces in C++.
Use the same interface to work with data no matter where it's stored, whether in a data frame, a data table, or a database.
You can install:
the latest released version from CRAN with
install.packages("dplyr")
You'll probably also want to install the data packages used in most examples: install.packages(c("nycflights13", "Lahman")).
If you encounter a clear bug, please file a minimal reproducible example on GitHub. For questions and other discussion, please
use the manipulatr mailing list.
Learning dplyr
To get started, read the notes below, then read the intro vignette: vignette("introduction", package = "dplyr") . To make
the most of dplyr, I also recommend that you familiarise yourself with the principles of tidy data: this will help you get your
data into a form that works well with dplyr, ggplot2 and R's many modelling functions.
https://github.com/hadley/dplyr
Each tbl also comes in a grouped variant which allows you to easily perform operations "by group":
carriers_df  <- flights %>% group_by(carrier)
carriers_db1 <- flights_db1 %>% group_by(carrier)
carriers_db2 <- flights_db2 %>% group_by(carrier)
They all work as similarly as possible across the range of data sources. The main difference is performance:
system.time(carriers_df %>% summarise(delay = mean(arr_delay)))
#>    user  system elapsed
#>   0.040   0.001   0.043
system.time(carriers_db1 %>% summarise(delay = mean(arr_delay)))
Data frame methods are much faster than the plyr equivalent. The database methods are slower, but can work with
data that don't fit in memory.
system.time(plyr::ddply(flights, "carrier", plyr::summarise, delay = mean(arr_delay, na.rm = TRUE)))
#>    user  system elapsed
#>   0.104   0.029
do()
As well as the specialised operations described above, dplyr also provides the generic do() function which applies any R
function to each group of the data.
Let's take the Batting table from the built-in Lahman database. We'll group it by year, and then fit a model to explore the
relationship between the number of at-bats and runs:
by_year <- lahman_df() %>%
  group_by(yearID)
by_year %>%
  do(mod = lm(R ~ AB, data = .))
#> Source: local data frame [144 x 2]
Note that if you are fitting lots of linear models, it's a good idea to use biglm because it creates model objects that are
considerably smaller:
by_year %>%
  do(mod = lm(R ~ AB, data = .)) %>%
  object.size() %>%
  print(unit = "MB")
#> 22.7 Mb
by_year %>%
  do(mod = biglm::biglm(R ~ AB, data = .)) %>%
  object.size() %>%
  print(unit = "MB")
Plyr compatibility
You'll need to be a little careful if you load both plyr and dplyr at the same time. I'd recommend loading plyr first, then dplyr, so
that the faster dplyr functions come first in the search path. By and large, any function provided by both dplyr and plyr works
in a similar way, although dplyr functions tend to be faster and more general.
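A sketch of the recommended loading order, plus the :: operator as an explicit escape hatch when both packages are attached:

```r
library(plyr)   # load plyr first...
library(dplyr)  # ...so dplyr's summarise(), mutate(), etc. come first in the search path

# When in doubt, qualify the call explicitly:
dplyr::summarise(mtcars, avg_mpg = mean(mpg))
plyr::summarise(mtcars, avg_mpg = mean(mpg))
```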
Related approaches
Blaze
|Stat
Pig
Haven
Haven allows you to load foreign data formats (SAS, SPSS and Stata) into R by wrapping the fantastic ReadStat C library
written by Evan Miller. Haven offers similar functionality to the base foreign package but:
Can read SAS's proprietary binary format (SAS7BDAT). The one other package on CRAN that does that, sas7bdat, was
created to document the reverse-engineering effort. Thus its implementation is designed for experimentation, rather than
efficiency. Haven is significantly faster and should also support a wider range of SAS files (including compressed), and
works with SAS7BCAT files.
It can be faster. Some SPSS files seem to load about 4x faster, but others load slower. If you have a lot of SPSS files to
import, you might want to benchmark both and pick the fastest.
Works with Stata 13 and 14 files (foreign only works up to Stata 12).
Can also write SPSS and Stata files (this is hard to test, so if you run into any problems, please let me know).
Can only read the data from the most common statistical packages (SAS, Stata and SPSS).
All functions return tibbles.
Date times are converted to corresponding R classes and labelled vectors are returned as a new labelled class. You
can easily coerce to factors or replace labelled values with missings as appropriate.
Uses underscores instead of dots ;)
Haven is still a work in progress so please file an issue if it fails to correctly load a file that you're interested in.
Installation
# Install the released version from CRAN:
install.packages("haven")

# Install the cutting edge development version from GitHub:
# install.packages("devtools")
devtools::install_github("hadley/haven")
Usage
SAS: read_sas("path/to/file")
SPSS: read_sav("path/to/file")
Stata: read_dta("path/to/file")
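After import you will typically want to deal with labelled columns; a minimal sketch, assuming a hypothetical file path and column name:

```r
library(haven)

# read_sav() returns a tibble; columns with value labels come back
# as labelled vectors ("path/to/file.sav" and "region" are hypothetical)
df <- read_sav("path/to/file.sav")

# Coerce a labelled column's value labels to factor levels:
df$region <- as_factor(df$region)
```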
https://github.com/hadley/haven
Updating ReadStat
If you're working on the development version of haven, and you'd like to update the embedded ReadStat library, you can run
the following code. It is not necessary if you're just using the package.
tmp <- tempfile()
download.file("https://github.com/WizardMac/ReadStat/archive/master.zip", tmp,
Leaflet for R
Introduction
Leaflet is one of the most popular open-source JavaScript libraries for interactive maps. It's used by websites ranging from The New York
Times and The Washington Post to GitHub and Flickr, as well as GIS specialists like OpenStreetMap, Mapbox, and CartoDB.
This R package makes it easy to integrate and control Leaflet maps in R.
Features
Interactive panning/zooming
Compose maps using arbitrary combinations of:
Map tiles
Markers
Polygons
Lines
Popups
GeoJSON
Create maps right from the R console or RStudio
Embed maps in knitr/R Markdown documents and Shiny apps
Easily render Spatial objects from the sp package, or data frames with latitude/longitude columns
Use map bounds and mouse events to drive Shiny logic
Installation
To install this R package, run this command at your R prompt:
install.packages("leaflet")
# to install the development version from Github, run
# devtools::install_github("rstudio/leaflet")
Once installed, you can use this package at the R console, within R Markdown documents, and within Shiny applications.
Basic Usage
You create a Leaflet map with these basic steps:
1. Create a map widget by calling leaflet().
2. Add layers (i.e., features) to the map by using layer functions (e.g. addTiles, addMarkers, addPolygons) to modify the map widget.
3. Repeat step 2 as desired.
4. Print the map widget to display it.
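Chained with the pipe, the four steps look like this (the marker coordinates match the example on this page):

```r
library(leaflet)

m <- leaflet() %>%
  addTiles() %>%  # add the default base map tiles
  addMarkers(lng = 174.768, lat = -36.852, popup = "The birthplace of R")
m  # print the map widget to display it
```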
In case you're not familiar with the magrittr pipe operator (%>%), here is the equivalent without using pipes:
m <- leaflet()
m <- addTiles(m)
m <- addMarkers(m, lng = 174.768, lat = -36.852, popup = "The birthplace of R")
m
Next Steps
We highly recommend that you proceed to The Map Widget page before exploring the rest of this site, as it describes common idioms
we'll use throughout the examples on the other pages.
Although we have tried to provide an R-like interface to Leaflet, you may want to check out the API documentation of Leaflet occasionally
when the meanings of certain parameters are not clear to you.
https://rstudio.github.io/leaflet/
R Markdown
Dynamic Documents for R
R Markdown is an authoring format that enables easy creation of
dynamic documents, presentations, and reports from R. It combines
the core syntax of markdown (an easy-to-write plain-text format) with
embedded R code chunks that are run so their output can be included
in the final document.
R Markdown documents are fully reproducible (they can be
automatically regenerated whenever underlying R code or data
changes).
R Markdown has many available output formats including HTML, PDF,
MS Word, Beamer, HTML5 slides, Tufte handouts, notebooks, books,
dashboards, and websites.
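As an illustrative sketch (the title and chunk contents are made up), a minimal R Markdown document combines a YAML header, markdown prose, and an embedded R chunk:

````markdown
---
title: "My report"
output: html_document
---

Some markdown prose, followed by a chunk whose output is embedded in the result:

```{r}
summary(cars)
```
````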
Getting Started
Quick Tour
R Markdown Cheat Sheet
R Markdown Reference Guide
Learning More
With the basics described above you can get started with R Markdown right away. To learn more see:
Markdown Basics, which describes the most commonly used markdown constructs.
R Code Chunks, which goes into more depth on customizing the behavior of embedded R code.
R Markdown Cheat Sheet (PDF), a quick guide to the most commonly used markdown syntax, knitr options, and
output formats.
R Markdown Reference Guide (PDF), a more comprehensive reference guide to markdown, knitr, and output format
options.
Bibliographies and Citations, which describes how to include references in R Markdown documents.
Interactive Documents with Shiny, which describes how to make R Markdown documents interactive using Shiny.
Compiling Reports from R Scripts, which describes how to compile HTML, PDF, or MS Word reports from R scripts.
Document output formats: HTML, PDF, Word, Markdown, and GitHub.
Presentation output formats: ioslides, reveal.js, Slidy, and Beamer.
For even more in-depth documentation see:
The website for the knitr package. Knitr is an extremely powerful tool for dynamic content generation and the
website has a wealth of documentation and examples to help you utilize it to its full potential.
The full specification of Pandoc Markdown, which describes all of the markdown features and syntax available
within R Markdown documents.
If you are migrating documents from R Markdown v1 or wish to continue using R Markdown v1, see the article on
Migrating from R Markdown v1.
See also the R Markdown developer documentation including:
Creating re-usable Document Templates
Guide to Creating New Formats for R Markdown
Adding interactive components to R Markdown documents using HTML Widgets and Shiny Widgets.
Creating Parameterized Reports to re-render the same document with distinct values for various key inputs.
http://rmarkdown.rstudio.com/