How to plot two histograms together in R?

I am using R and I have two data frames: carrots and cucumbers. Each data frame has a single numeric column which lists the length of all measured carrots (total: 100k carrots) and cucumbers (total: 50k cucumbers).

I wish to plot two histograms - carrot lengths and cucumber lengths - on the same plot. They overlap, so I guess I also need some transparency. I also need to use relative frequencies, not absolute numbers, since the number of instances in each group is different.

Something like this would be nice, but I don't understand how to create it from my two tables:


Answers

@Dirk Eddelbuettel: The basic idea is excellent, but the code as shown can be improved. [It takes too long to explain in a comment, hence a separate answer.]

The hist() function by default draws plots, so you need to add the plot=FALSE option. Moreover, it is clearer to establish the plot area by a plot(0,0,type="n",...) call in which you can add the axis labels, plot title etc. Finally, I would like to mention that one could also use shading to distinguish between the two histograms. Here is the code:

set.seed(42)
p1 <- hist(rnorm(500,4),plot=FALSE)
p2 <- hist(rnorm(500,6),plot=FALSE)
plot(0,0,type="n",xlim=c(0,10),ylim=c(0,100),xlab="x",ylab="freq",main="Two histograms")
plot(p1,col="green",density=10,angle=135,add=TRUE)
plot(p2,col="blue",density=10,angle=45,add=TRUE)

And here is the result (a bit too wide because of RStudio :-) ):


That image you linked to was for density curves, not histograms.

If you've been reading on ggplot then maybe the only thing you're missing is combining your two data frames into one long one.

So, let's start with something like what you have, two separate sets of data and combine them.

carrots <- data.frame(length = rnorm(100000, 6, 2))
cukes <- data.frame(length = rnorm(50000, 7, 2.5))

#Now, combine your two dataframes into one.  First make a new column in each that will be a variable to identify where they came from later.
carrots$veg <- 'carrot'
cukes$veg <- 'cuke'

#and combine into your new data frame vegLengths
vegLengths <- rbind(carrots, cukes)

After that step, which is unnecessary if your data is in long format already, you only need one line to make your plot.

library(ggplot2)

ggplot(vegLengths, aes(length, fill = veg)) + geom_density(alpha = 0.2)

Now, if you really did want histograms, the following will work. Note that you must change position from the default "stack"; you might miss that if you don't have a clear idea of what your data should look like. A higher alpha looks better there. Also note that I made these density histograms; it's easy to remove the y = ..density.. to get it back to counts.

ggplot(vegLengths, aes(length, fill = veg)) + 
   geom_histogram(alpha = 0.5, aes(y = ..density..), position = 'identity')
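
A side note, not part of the original answer: in newer ggplot2 releases the ..density.. notation has been superseded by after_stat(), so an equivalent sketch would be:

ggplot(vegLengths, aes(length, fill = veg)) + 
   geom_histogram(alpha = 0.5, aes(y = after_stat(density)), position = 'identity')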


Here is an even simpler solution using base graphics and alpha-blending (which does not work on all graphics devices):

set.seed(42)
p1 <- hist(rnorm(500,4))                     # centered at 4
p2 <- hist(rnorm(500,6))                     # centered at 6
plot( p1, col=rgb(0,0,1,1/4), xlim=c(0,10))  # first histogram
plot( p2, col=rgb(1,0,0,1/4), xlim=c(0,10), add=T)  # second

The key is that the colours are semi-transparent.

Edit, more than two years later: As this just got an upvote, I figure I may as well add a visual of what the code produces as alpha-blending is so darn useful:


Plotly's R API might be useful for you. The graph below is here.

library(plotly)
#add username and key
p <- plotly(username="Username", key="API_KEY")
#generate data
x0 = rnorm(500)
x1 = rnorm(500)+1
#arrange your graph
data0 = list(x=x0,
         name = "Carrots",
         type='histogramx',
         opacity = 0.8)

data1 = list(x=x1,
         name = "Cukes",
         type='histogramx',
         opacity = 0.8)
#specify type as 'overlay'
layout <- list(barmode='overlay',
               plot_bgcolor = 'rgba(249,249,251,.85)')  
#format response, and use 'browseURL' to open graph tab in your browser.
response = p$plotly(data0, data1, kwargs=list(layout=layout))

url = response$url
filename = response$filename

browseURL(response$url)

Full disclosure: I'm on the team.
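
For reference, here is a sketch of the same overlaid-histogram idea using the current plotly R syntax (plot_ly/add_histogram) rather than the legacy credentials-based API shown above; the data and trace names are illustrative:

library(plotly)
p <- plot_ly(alpha = 0.6) %>%
  add_histogram(x = rnorm(500), name = "Carrots") %>%
  add_histogram(x = rnorm(500) + 1, name = "Cukes") %>%
  layout(barmode = "overlay")
p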


Here's a function I wrote that uses pseudo-transparency to represent overlapping histograms

plotOverlappingHist <- function(a, b, colors=c("white","gray20","gray50"),
                                breaks=NULL, xlim=NULL, ylim=NULL){

  ahist=NULL
  bhist=NULL

  if(!(is.null(breaks))){
    ahist=hist(a,breaks=breaks,plot=F)
    bhist=hist(b,breaks=breaks,plot=F)
  } else {
    ahist=hist(a,plot=F)
    bhist=hist(b,plot=F)

    dist = ahist$breaks[2]-ahist$breaks[1]
    breaks = seq(min(ahist$breaks,bhist$breaks),max(ahist$breaks,bhist$breaks),dist)

    ahist=hist(a,breaks=breaks,plot=F)
    bhist=hist(b,breaks=breaks,plot=F)
  }

  if(is.null(xlim)){
    xlim = c(min(ahist$breaks,bhist$breaks),max(ahist$breaks,bhist$breaks))
  }

  if(is.null(ylim)){
    ylim = c(0,max(ahist$counts,bhist$counts))
  }

  overlap = ahist
  for(i in 1:length(overlap$counts)){
    if(ahist$counts[i] > 0 & bhist$counts[i] > 0){
      overlap$counts[i] = min(ahist$counts[i],bhist$counts[i])
    } else {
      overlap$counts[i] = 0
    }
  }

  plot(ahist, xlim=xlim, ylim=ylim, col=colors[1])
  plot(bhist, xlim=xlim, ylim=ylim, col=colors[2], add=T)
  plot(overlap, xlim=xlim, ylim=ylim, col=colors[3], add=T)
}
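
A minimal usage sketch (the data values here are illustrative, not from the original answer):

a <- rnorm(10000, 3, 1)
b <- rnorm(10000, 6, 1)
plotOverlappingHist(a, b, xlim = c(0, 10))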

Here's another way to do it using R's support for transparent colors

a=rnorm(1000, 3, 1)
b=rnorm(1000, 6, 1)
hist(a, xlim=c(0,10), col="red")
hist(b, add=T, col=rgb(0, 1, 0, 0.5) )

The results end up looking something like this:


Here's a version like the ggplot2 one I gave, only in base R. I copied some from @nullglob.

generate the data

carrots <- rnorm(100000,5,2)
cukes <- rnorm(50000,7,2.5)

You don't need to put it into a data frame like with ggplot2. The drawback of this method is that you have to write out a lot more of the details of the plot. The advantage is that you have control over more details of the plot.

## calculate the density - don't plot yet
densCarrot <- density(carrots)
densCuke <- density(cukes)
## calculate the range of the graph
xlim <- range(densCuke$x,densCarrot$x)
ylim <- range(0,densCuke$y, densCarrot$y)
#pick the colours
carrotCol <- rgb(1,0,0,0.2)
cukeCol <- rgb(0,0,1,0.2)
## plot the carrots and set up most of the plot parameters
plot(densCarrot, xlim = xlim, ylim = ylim, xlab = 'Lengths',
     main = 'Distribution of carrots and cucumbers', 
     panel.first = grid())
#put our density plots in
polygon(densCarrot, density = -1, col = carrotCol)
polygon(densCuke, density = -1, col = cukeCol)
## add a legend in the corner
legend('topleft',c('Carrots','Cucumbers'),
       fill = c(carrotCol, cukeCol), bty = 'n',
       border = NA)


There are already beautiful answers here, but I thought of adding this. It looks good to me. (Random numbers copied from @Dirk.) library(scales) is needed.

set.seed(42)
hist(rnorm(500,4),xlim=c(0,10),col='skyblue',border=F)
hist(rnorm(500,6),add=T,col=scales::alpha('red',.5),border=F)

The result is...

Update: This overlapping function may also be useful to some.

hist0 <- function(...,col='skyblue',border=T) hist(...,col=col,border=border) 

I feel the result from hist0 is prettier to look at than that of hist.

hist2 <- function(var1, var2, name1 = '', name2 = '',
                  breaks = min(max(length(var1), length(var2)), 20),
                  main0 = "", alpha0 = 0.5, grey = 0, border = F, ...) {

  library(scales)    # for alpha()
  library(magrittr)  # for the %>% pipe
  colh <- c(rgb(0, 1, 0, alpha0), rgb(1, 0, 0, alpha0))
  if(grey) colh <- c(alpha(grey(0.1, alpha0)), alpha(grey(0.9, alpha0)))

  max0 = max(var1, var2)
  min0 = min(var1, var2)

  den1_max <- hist(var1, breaks = breaks, plot = F)$density %>% max
  den2_max <- hist(var2, breaks = breaks, plot = F)$density %>% max
  den_max <- max(den2_max, den1_max) * 1.2
  var1 %>% hist0(xlim = c(min0, max0), breaks = breaks,
                 freq = F, col = colh[1], ylim = c(0, den_max), main = main0, border = border, ...)
  var2 %>% hist0(xlim = c(min0, max0), breaks = breaks,
                 freq = F, col = colh[2], ylim = c(0, den_max), add = T, border = border, ...)
  legend(min0, den_max, legend = c(
    ifelse(nchar(name1) == 0, substitute(var1) %>% deparse, name1),
    ifelse(nchar(name2) == 0, substitute(var2) %>% deparse, name2),
    "Overlap"), fill = c('white', 'white', colh[1]), bty = "n", cex = 1, ncol = 3)

  legend(min0, den_max, legend = c(
    ifelse(nchar(name1) == 0, substitute(var1) %>% deparse, name1),
    ifelse(nchar(name2) == 0, substitute(var2) %>% deparse, name2),
    "Overlap"), fill = c(colh, colh[2]), bty = "n", cex = 1, ncol = 3)
}

The result of

par(mar=c(3, 4, 3, 2) + 0.1) 
set.seed(100) 
hist2(rnorm(10000,2),rnorm(10000,3),breaks = 50)

is


Here's my attempt at a comprehensive answer from the dplyr perspective, following the broad outline of Arun's answer (but somewhat rearranged based on differing priorities).

Syntax

There is some subjectivity to syntax, but I stand by my statement that the concision of data.table makes it harder to learn and harder to read. This is partly because dplyr is solving a much easier problem!

One really important thing that dplyr does for you is that it constrains your options. I claim that most single table problems can be solved with just five key verbs filter, select, mutate, arrange and summarise, along with a "by group" adverb. That constraint is a big help when you're learning data manipulation, because it helps order your thinking about the problem. In dplyr, each of these verbs is mapped to a single function. Each function does one job, and is easy to understand in isolation.

You create complexity by piping these simple operations together with %>%. Here's an example from one of the posts Arun linked to:

diamonds %>%
  filter(cut != "Fair") %>%
  group_by(cut) %>%
  summarize(
    AvgPrice = mean(price),
    MedianPrice = as.numeric(median(price)),
    Count = n()
  ) %>%
  arrange(desc(Count))

Even if you've never seen dplyr before (or even R!), you can still get the gist of what's happening because the functions are all English verbs. The disadvantage of English verbs is that they require more typing than [, but I think that can be largely mitigated by better autocomplete.

Here's the equivalent data.table code:

diamondsDT <- data.table(diamonds)
diamondsDT[
  cut != "Fair", 
  .(AvgPrice = mean(price),
    MedianPrice = as.numeric(median(price)),
    Count = .N
  ), 
  by = cut
][ 
  order(-Count) 
]

It's harder to follow this code unless you're already familiar with data.table. (I also couldn't figure out how to indent the repeated [ in a way that looks good to my eye.) Personally, when I look at code I wrote 6 months ago, it's like looking at code written by a stranger, so I've come to prefer straightforward, if verbose, code.

Two other minor factors that I think slightly decrease readability:

  • Since almost every data.table operation uses [, you need additional context to figure out what's happening. For example, is x[y] joining two data tables or extracting columns from a data frame? (See the short sketch after this list.) This is only a small issue, because in well-written code the variable names should suggest what's happening.

  • I like that group_by() is a separate operation in dplyr. It fundamentally changes the computation, so I think it should be obvious when skimming the code, and it's easier to spot group_by() than the by argument to [.data.table.
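
To make the first point concrete, here is a small sketch (not from the original answer) of the two readings of x[y]:

library(data.table)
X <- data.table(id = 1:3, x = c("a", "b", "c"), key = "id")
Y <- data.table(id = 2:3, y = c(10, 20), key = "id")
X[Y]        # a join: rows of X matching Y on the key, with Y's columns added
X[, .(x)]   # column extraction: just the x column, as a data.table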

I also like that the pipe isn't limited to just one package. You can start by tidying your data with tidyr, and finish up with a plot in ggvis. And you're not limited to the packages that I write - anyone can write a function that forms a seamless part of a data manipulation pipe. In fact, I rather prefer the previous data.table code rewritten with %>%:

diamonds %>% 
  data.table() %>% 
  .[cut != "Fair", 
    .(AvgPrice = mean(price),
      MedianPrice = as.numeric(median(price)),
      Count = .N
    ), 
    by = cut
  ] %>% 
  .[order(-Count)]

And the idea of piping with %>% is not limited to data frames; it is easily generalised to other contexts: interactive web graphics, web scraping, gists, run-time contracts, ...
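
As an illustration of that generality, here is a small sketch (assuming the rvest package; the URL is a placeholder) of the same pipe applied to web scraping rather than data frames:

library(rvest)
read_html("https://example.com") %>%
  html_nodes("h1") %>%
  html_text()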

Memory and performance

I've lumped these together because, to me, they're not that important. Most R users work with well under 1 million rows of data, and dplyr is fast enough for that size of data that you're not aware of processing time. We optimise dplyr for expressiveness on medium data; feel free to use data.table for raw speed on bigger data.

The flexibility of dplyr also means that you can easily tweak performance characteristics using the same syntax. If the performance of dplyr with the data frame backend is not good enough for you, you can use the data.table backend (albeit with a somewhat restricted set of functionality). If the data you're working with doesn't fit in memory, then you can use a database backend.
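
A minimal sketch of the database-backend idea, assuming the DBI, RSQLite and dbplyr packages are installed (the table name is illustrative):

library(DBI)
library(dplyr)
con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "diamonds", as.data.frame(ggplot2::diamonds))
tbl(con, "diamonds") %>%
  filter(cut != "Fair") %>%
  group_by(cut) %>%
  summarise(AvgPrice = mean(price, na.rm = TRUE), Count = n()) %>%
  arrange(desc(Count)) %>%
  collect()   # the query runs in SQLite; collect() pulls the result into R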

All that said, dplyr performance will get better in the long-term. We'll definitely implement some of the great ideas of data.table like radix ordering and using the same index for joins & filters. We're also working on parallelisation so we can take advantage of multiple cores.

Features

A few things that we're planning to work on in 2015:

  • the readr package, to make it easy to get files off disk and into memory, analogous to fread() (see the sketch after this list).

  • More flexible joins, including support for non-equi-joins.

  • More flexible grouping like bootstrap samples, rollups and more
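
A minimal sketch of the readr/fread parallel mentioned in the first bullet (the file name is illustrative):

library(readr)
library(data.table)
veg_readr <- read_csv("lengths.csv")   # readr: returns a tibble
veg_dt    <- fread("lengths.csv")      # data.table: returns a data.table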

I'm also investing time into improving R's database connectors, the ability to talk to web APIs, and making it easier to scrape HTML pages.




