Site Maintainer README for bioconductor.org

The canonical location for this code is https://github.com/Bioconductor/bioconductor.org

You can get set up by cloning this repository and running

git clone https://github.com/Bioconductor/bioconductor.org

Then, after making changes locally, run the following to commit them and push the commits back to GitHub.

# commit code to git
git commit -m "My informative commit message"

# push code to github
git push

Unix-ish Developer Required Software

Required software

NOTE: Before reading the following instructions you may want to consider installing the web site as a Docker container. See the instructions below.

  1. Make a fork of the bioconductor.org git repository, clone it, and create a new branch for your changes (see the GitHub documentation on creating a pull request from a fork):

    git clone https://github.com/<your fork>/bioconductor.org
    
  2. Build a docker image by navigating to the cloned bioconductor.org directory and running:

    docker build -t <image_name> .
    

    where,

    <image_name> is the name you want to give the docker image.
    It can be anything you like; you will need it later,
    but it is only seen and used by you.
    
  3. Run the docker container. You will need the image name <image_name> that you assigned previously. The container has the dependencies installed to rake the ruby code and host the website on your local machine at http://localhost:3000.

    docker run -it -p 3000:3000 \
        -v /<full_path>/bioconductor.org:/opt/bioconductor.org \
            --name <container_name> \
                <image_name> /bin/bash
    

    where,

    -it attaches an interactive terminal in the container
    
    -p maps the container's port 3000 to the host machine's port 3000
    
    -v mounts a volume: the website (bioconductor.org) directory
    from your local machine is mounted in the docker container
    
    <container_name> is the name you want to give the docker container.
    It will be easier to access the container later if you give it a name
    

    The command will drop you into the container's terminal, where you will need to run

    rake
    

    and

    cd output
    adsf
    
  4. Make your changes on this branch, add content or edit content.

  5. Once the changes are made and you want to see them at http://localhost:3000, there are two ways to run rake:

    by running rake inside the docker container shell, making sure you are in the /opt/bioconductor.org directory;

    or,

    or, without accessing the docker shell. You will need the container name or CONTAINER ID; find it with

     docker ps
    

    and,

     docker exec <container_id / container_name> rake
    
  6. Then to stop the process, you need to get the container name or CONTAINER ID with,

    docker ps
    

    and,

    docker stop <container_id / container_name>
    
  7. If you wish to completely remove the container from your docker once you stopped it, you can run

    docker rm <container_id / container_name>
    
  8. Once you have reviewed your changes, send a pull request to the devel branch. The pull request should be made from the branch on your fork to the devel branch on GitHub.

Ruby

The site requires ruby 2.2.2 or newer.

There are numerous issues on various platforms when attempting to install appropriate versions of ruby and the necessary ruby packages. The simplest way around all of this is to use rbenv, which allows you to switch between various ruby versions and avoids conflicts between them. NOTE: rbenv works on Unix only; if you are on Windows, skip to the Windows section.

On ubuntu, before proceeding, make sure the libsqlite3-dev package is installed (sudo apt-get install libsqlite3-dev).

The following instructions are adapted from the rbenv page. It's worth reading this to understand how rbenv works.

Important note: Never use sudo when working with a ruby that has been installed by rbenv. rbenv installs everything in your home directory so you should never need to become root or fiddle with permissions.

  1. Make sure you do not have rvm installed. which rvm should not return anything. If you do have it installed, refer to this page for instructions on removing it.

  2. Check out rbenv into ~/.rbenv.

    $ git clone https://github.com/rbenv/rbenv.git ~/.rbenv
  3. Add ~/.rbenv/bin to your $PATH for access to the rbenv command-line utility.

    $ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile

    Ubuntu Desktop note: Modify your ~/.bashrc instead of ~/.bash_profile.

    Zsh note: Modify your ~/.zshrc file instead of ~/.bash_profile.

  4. Add rbenv init to your shell to enable shims and autocompletion.

    $ echo 'eval "$(rbenv init -)"' >> ~/.bash_profile

    Same as in previous step, use ~/.bashrc on Ubuntu, or ~/.zshrc for Zsh.

  5. Restart your shell so that PATH changes take effect. (Opening a new terminal tab will usually do it.) Now check if rbenv was set up:

    $ type rbenv
    #=> "rbenv is a function"
  6. Install ruby-build, which provides the rbenv install command that simplifies the process of installing new Ruby versions:

    ```sh
    git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
    ```
    

Now you need to install ruby. Go to the Ruby Downloads Page to find out what the current stable version is. As of 3/06/2020 it is 2.7.0; however, that version still had issues with modules, so the examples below use 2.6.5, the version we currently use for the website. Substitute another version for 2.6.5 if you wish to use or test it.

To install this version of ruby in rbenv, type

rbenv install 2.6.5

Then, to make this the version of ruby that you will use, type:

rbenv global 2.6.5

If you want to use different versions of ruby in different contexts, read the rbenv page for more information.

Windows Developer Required Software

  1. Download and run the one-click ruby installer http://rubyinstaller.org/downloads/. Accept all default settings.

  2. Also download and install the Development Kit from http://rubyinstaller.org/downloads/. Be sure to add the bin dir to your path (see devkitvars.bat)

  3. If you don't already have it, be sure to install cygwin, and explicitly install rsync. rsync is required for parts of the web site to work.

  4. Install git client. https://git-scm.com/downloads

  5. Follow the developer setup instructions below.

Developer Setup

Checkout the bioconductor.org codebase and set up upstream remote

git clone https://github.com/Bioconductor/bioconductor.org
git remote add upstream [email protected]:admin/bioconductor.org

Installing Necessary Ruby Packages

Ruby packages are called gems and gem is the program used to install them.

To save time, ensure your ~/.gemrc file contains the text

gem: --no-document

This will ensure that gem does not try to install documentation files that you will not use.
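The setting can also be added idempotently from the shell (a sketch; it assumes your gem config lives at ~/.gemrc):

```shell
# Append 'gem: --no-document' to ~/.gemrc unless it is already there.
touch "$HOME/.gemrc"
grep -qxF 'gem: --no-document' "$HOME/.gemrc" || \
  echo 'gem: --no-document' >> "$HOME/.gemrc"
```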

The web site comes with a Gemfile, which is similar to an R package DESCRIPTION file in that it lists all needed dependencies. Gemfiles are read by the bundler gem, so install that as follows, prepending sudo if necessary (remember, don't use sudo if your ruby was installed with rbenv):

cd bioconductor.org
gem install bundler

Then, assuming you are in the bioconductor.org working copy, issue this command to install all dependencies, again prepending sudo if necessary:

bundle install

Build the site

cd bioconductor.org # if you aren't already in the working copy
rake

One step in the build process runs 'nanoc', "a Ruby web publishing system
for building small to medium-sized websites"; it is one of the gems you installed above. If you ever need to run nanoc explicitly:

nanoc compile

To run an abbreviated compile, which does not attempt to build all package pages:

QUICK_NANOC_COMPILE=true nanoc co

Whether run by hand or by rake, the compiled html files are all found in and below output/, an immediate subdirectory of the bioconductor.org/ directory you have been working in.

Start the built-in development server, 'adsf' ("a dead-simple file server"):

cd output
adsf

Test in a browser by going to http://localhost:3000/

Linters

You will require node and npm to install the linters. Installation instructions for your specific OS can be found on the node.js website:

https://nodejs.org/en/download

Or if you would like to use your package manager to install, you can find instructions here:

https://nodejs.org/en/download/package-manager

Install linters: npm ci

This project includes linters for HTML, CSS, JavaScript, and Markdown files. To run all linters, use the command npm run lint. The options available for each linter are:

  • stylelint (CSS):

    • npm run lint-css <(optional)directory/file> (default directory is current working directory)
  • eslint (JavaScript):

    • npm run lint-js <(optional)directory/file> (default directory is current working directory)
  • htmllint (HTML)

    • npm run lint-html <(optional)directory/file> (default directory is current working directory)
  • markdownlint (Markdown)

    • npm run lint-md <(required)directory/file>

Overview of site source code

  • README.md :: You are reading this file or a file generated from this file.

  • Rakefile :: A Rakefile is to rake as a Makefile is to make. You can see the available targets by running rake -T in the directory containing Rakefile.

  • Rules :: This is a Ruby syntax file that describes how site content is transformed from its source form into its output form (this is called filtering), what layout to use (layouts are the shared templates), and where to write the output (this is called routing). See the nanoc tutorial and the manual for details.

  • assets :: This directory is not managed by nanoc. It contains files that do not undergo any filtering, layout-ing, or routing. Contents of the assets directory are copied to the output directory using rsync.

  • config.yaml :: Nanoc configuration file for the bioconductor.org site. This file is written in YAML.

  • content :: This is where the bulk of the raw (source form) site content lives. Important details:

          - Each page has two related files:
              a `.yaml` file containing item attributes and
              a `.<extension>` file containing the raw source content
              this can be `.md` or `.html`.
    
          - The default behavior is that a content file like
             `install.md` is filtered into HTML and then written to
             `output/install/index.html`. This scheme allows for
              clean URLs that avoid having a file extension.
    
          - Folders like `about` living inside content have their own default
              `index` files within.
    
  • layouts :: This is where the content templates live. Important details:

          - Files that live directly inside the layout folder are the
              layouts, the content blocks would live inside /component
    
  • lib :: Ruby helper functions and nanoc extensions live here. Files in this directory are automatically loaded by nanoc during site processing.

  • migration :: Documentation and scripts used in the process of migrating the bioconductor.org site from Plone to nanoc.

  • output :: This directory is created when you compile the bioconductor.org site using nanoc. It contains the final static HTML and other assets. Deploying the site means pushing out an update of the contents of output to the live server.

  • scripts :: Helper scripts for managing the site live here.
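The clean-URL routing described under content above can be sketched in shell (the filename is illustrative):

```shell
# Hypothetical illustration of nanoc's default routing for the content
# directory: install.md becomes output/install/index.html.
src="install.md"
base="${src%.*}"                 # strip the extension
out="output/${base}/index.html"  # clean-URL output path
echo "$out"
```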

How to add a page

How to add event

You will use the helper script ./scripts/add_event to add an event to the site, using the following steps:

  1. Always run ./scripts/add_event from the top-level of your website working copy

  2. Run ./scripts/add_event EVENT_NAME. This will create an EVENT_NAME.yaml file in the ./content/help/events/ directory

  3. The default EVENT_NAME.yaml file will look like this:

    title: TITLE FOR EVENT_NAME
    location: Seattle, WA, USA
    event_host: FHCRC
    start: 2010-06-29
    end:   2010-06-29
    link:
        text: details and registration
        url: https://secure.bioconductor.org/EVENT_NAME
    
  4. Edit the EVENT_NAME.yaml file

  5. Use git to commit the changes and additions made by add_event

How to add course material

You will use a helper script ./scripts/course_mgr to add course material to the site. PDF files for labs and presentations as well as course-specific packages and data are not stored in git. The index pages that describe the course and provide links to the materials are stored in git. The course_mgr script will help with index file creation and data transfer.

course_mgr workflow and important tips

To add a course, you will typically perform the following steps (each described in detail below):

  1. Always run ./scripts/course_mgr from the top-level of your website working copy.
  2. Run ./scripts/course_mgr --create COURSE_NAME
  3. Run ./scripts/course_mgr --index COURSE_NAME
  4. Build and preview site
  5. Run ./scripts/course_mgr --push COURSE_NAME
  6. Use git to commit changes and additions made by course_mgr

Using course_mgr

  1. Generate a skeleton course directory structure.

     ./scripts/course_mgr --create seattle-intro
    

    This will create a seattle-intro/ directory in the top-level of your website working copy -- do not add this directory or any files within it to git. Inside will be a course_config.yaml file that will look like this:

      title:
        The title of the course goes here
      start_date: 2010-01-27
      end_date: 2010-01-29
      instructors: ["Someone", "Another"]
      location: "Seattle, USA"
      url: https://secure.bioconductor.org/SeattleJan10/
      tags: ["intro", "seattle", "package"]
      description:
        You can put some description text here.
        Must be indented.
    
  2. Put course materials as files and directories into the skeleton directory. For example, you might end up with a directory like that shown below with two subdirectories, packages and presentation-slides, each containing course materials.

      seattle-intro
      |-- course_config.yaml
      |-- packages
      |   |-- day1_0.0.1.tar.gz
      |   |-- day2_0.0.1.tar.gz
      |   `-- day3_0.0.1.tar.gz
      `-- presentation-slides
          |-- First-steps-presentation.pdf
          |-- Microarray-presentation.pdf
          |-- annotation-presentation.pdf
          `-- sequence-presentation.pdf
    
  3. Now you are ready to create the index files.

       ./scripts/course_mgr --index seattle-intro
       CREATED: content/help/course-notes/2010/01/seattle-intro.(html|yaml)
       COPIED for preview:
         src: ./seattle-intro/*
         dst: output/help/course-notes/2010/01/seattle-intro/
       NEXT STEPS:
       - preview site with 'rake devserver'
         - Use URL: http://localhost:3000/help/course-materials/2010/seattle-intro/
         - edit CREATED files to add descriptions for links
         - if happy, run ./scripts/course_mgr --push 2010/seattle-intro
    

    This will create a course index content item in content filed appropriately based on the metadata provided in course_config.yaml. It will also copy the files and directories you created into the output directory so that you can do a full preview after compiling the site.

  4. If everything looks good, you can sync the data files to the web server (note that we do not put these files in git because large data files are not appropriate for git and they are not likely to change):

        ./scripts/course_mgr --push 2010/seattle-intro
        SYNC:
         src: ./seattle-intro
         dst: [email protected]:/loc/www/bioconductor-test.fhcrc.org/help/course-materials/2010/
        NEXT STEPS: git add/commit changes in contents
    
  5. Finally, "git add" the new course index html and yaml files that were generated in the content directory and commit.

Modifying an existing course

You can edit the pages for an existing course by editing the files in ./content. If you need to add or modify data files, run: ./scripts/course_mgr --pull 2010/course_to_modify

This will create a top-level directory called "course_to_modify". You
can then add or modify course material. When finished, run
  ./scripts/course_mgr --push 2010/course_to_modify

If you have changed the .md or .yaml files, do the following:
  cp course_to_modify/course_to_modify.* content/help/course-materials/2010
  git commit -m "made changes" content/help/course-materials/2010/course_to_modify

Adding course material to the spreadsheet

The page http://www.bioconductor.org/help/course-materials/ is built from the tab-delimited file etc/course_descriptions.tsv.

Add information to this file using a spreadsheet program (Excel, LibreOffice, etc.). Be sure to save in the original tsv format. Note that some spreadsheets insert non-ASCII characters which cause problems. Before committing your changes, check for this in R with:

tools::showNonASCII(readLines("etc/course_descriptions.tsv"))

And if it reports any non-ASCII characters (it will show line numbers), fix these in a text editor before committing. Usually the culprit is a non-ASCII hyphen that can be replaced with a regular hyphen.
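If R is not at hand, the same check can be approximated in the shell. This sketch builds a demo TSV containing a UTF-8 en dash (the usual culprit), flags it, and replaces it; the file path is illustrative and the \xHH escapes assume GNU sed:

```shell
# Build a demo TSV containing a UTF-8 en dash (bytes \xe2\x80\x93).
printf 'title\tdates\nIntro course\tJan 4\xe2\x80\x936\n' > /tmp/demo_course.tsv

# Flag lines containing bytes outside printable ASCII (tabs/newlines excluded).
LC_ALL=C grep -n '[^[:print:][:space:]]' /tmp/demo_course.tsv

# Replace the en dash with a plain ASCII hyphen (GNU sed \xHH escapes).
sed -i 's/\xe2\x80\x93/-/g' /tmp/demo_course.tsv
```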

Staging site scheduled update

The biocadmin user's crontab on staging.bioconductor.org is used to schedule site updates every twenty minutes. Below are some details on how the test site is configured.

The site source is located at ~biocadmin/bioc-test-web/bioconductor.org. The deploy_staging Rake task deploys site content to the staging server root on staging.

task :deploy_staging do
  dst = '/loc/www/bioconductor.org'
  site_config = YAML.load_file("./config.yaml")
  output_dir = site_config["output_dir"]
  system "rsync -gvprt --partial --exclude='.git' #{output_dir}/ #{dst}"
end
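For clarity, here is a sketch that echoes (without running) the rsync invocation the task above builds, assuming output_dir is set to output in config.yaml:

```shell
# Echo the command deploy_staging would run; rsync itself is not executed.
# The output_dir value is an assumption taken from a typical config.yaml.
output_dir="output"
dst="/loc/www/bioconductor.org"
echo "rsync -gvprt --partial --exclude='.git' ${output_dir}/ ${dst}"
```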

An update_site shell script updates from git, builds the site, and deploys it using Rake.

#!/bin/bash
cd bioconductor.org && \
  date && \
  git pull && \
  rake real_clean default deploy_staging deploy_production  && \
  date

We keep track of the output in a local log/update_site.log file and handle log rotation using logrotate. For this we need a config file:

# logrotate.conf
/home/biocadmin/bioc-test-web/log/update_site.log {
  daily
  copytruncate
  rotate 30
  dateext
  compress
  missingok
  su biocadmin biocadmin
 }

The following crontab entries are used to schedule site update, deployment, and log rotation (biocadmin user):

PATH=/usr/bin:/bin:/usr/sbin
[email protected]

# bioconductor-test.fhcrc.org website publishing
*/20 * * * *  cd $HOME/bioc-test-web;./update_site >> log/update_site.log 2>&1
59 23 * * * /usr/sbin/logrotate -f -s /home/biocadmin/bioc-test-web/logrotateState /home/biocadmin/bioc-test-web/logrotateFiles
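For reference, the */20 minute field in the publishing entry fires every twenty minutes; the minutes within an hour can be enumerated with seq:

```shell
# Minutes of each hour at which a */20 cron field triggers.
seq 0 20 59
```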

master.bioconductor.org Apache Configuration

A good resource is available at http://en.opensuse.org/Apache_Quickstart_HOWTO. The staging.bioconductor.org machine builds and stages the website; the website itself is hosted on master.bioconductor.org through Apache. This section discusses some of the Apache setup.

Apache module config

Apache vhosts config

Edit /etc/apache2/sites-available/000-default.conf

<VirtualHost *:80>
  ServerAdmin webmaster@localhost
  ServerName master.bioconductor.org

  # Customized ERROR responses
  ErrorDocument 404 /help/404/index.html
  ErrorDocument 403 /help/403/index.html

  # DocumentRoot: The directory out of which you will serve your
  # documents. By default, all requests are taken from this directory, but
  # symbolic links and aliases may be used to point to other locations.
  DocumentRoot /extra/www/bioc

  # if not specified, the global error log is used
  LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" awstats
  ErrorLog /var/log/apache2/bioconductor-error.log
  CustomLog /var/log/apache2/bioconductor-access.log awstats
  ScriptAlias /cgi-bin/ "/usr/local/awstats/wwwroot/cgi-bin/"
  ScriptAlias /cgi/ "/usr/local/cgi/"

  # don't lose time with IP address lookups
  HostnameLookups Off

  # needed for named virtual hosts
  UseCanonicalName Off

  ServerSignature On

    # For most configuration files from conf-available/, which are
    # enabled or disabled at a global level, it is possible to
    # include a line for only one particular virtual host. For example the
    # following line enables the CGI configuration for this host only
    # after it has been globally disabled with "a2disconf".
    #Include conf-available/serve-cgi-bin.conf

  # doc root
  <Directory /extra/www/bioc/>
      Options FollowSymLinks
      AllowOverride All
      #Controls who can get stuff from this server
      #Order allow,deny
      #Allow from all
      Require all granted

     AddOutputFilterByType DEFLATE text/html text/css application/javascript text/x-js
     BrowserMatch ^Mozilla/4 gzip-only-text/html
     BrowserMatch ^Mozilla/4\.0[678] no-gzip
     BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

  </Directory>

  <Directory /extra/www/bioc/checkResults/>
      Options Indexes FollowSymLinks
  </Directory>
  <Directory /extra/www/bioc/packages/submitted/>
      Options Indexes
  </Directory>
  <Directory /extra/www/bioc/packages/misc/>
      Options Indexes
  </Directory>
  <Directory /extra/www/bioc/pending/>
      Options Indexes
  </Directory>
  <Directory /extra/www/bioc/data/>
      Options Indexes
  </Directory>
  <Directory /extra/www/bioc/packages/3.6/bioc/src/contrib/Archive/>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride All
  </Directory>
  <Directory /extra/www/bioc/packages/3.7/bioc/src/contrib/Archive/>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride All
  </Directory>
  <Directory /extra/www/bioc/packages/3.8/bioc/src/contrib/Archive/>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride All
  </Directory>
  <Directory /extra/www/bioc/packages/3.9/bioc/src/contrib/Archive/>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride All
  </Directory>
  <Directory /extra/www/bioc/packages/3.10/bioc/src/contrib/Archive/>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride All
  </Directory>
  <Directory /extra/www/bioc/packages/3.11/bioc/src/contrib/Archive/>
      Options Indexes FollowSymLinks MultiViews
      AllowOverride All
  </Directory>

 # configure the Apache web server to work with Solr
 ProxyRequests Off
 <Proxy *>
   Order deny,allow
   Allow from all
 </Proxy>
 ProxyPass /solr/default/select http://localhost:8983/solr/default/select
 ProxyPreserveHost On
 ProxyStatus On
</VirtualHost>

How to test for broken links

You can run wget as shown below to get a report on 404s for the site. Note that this runs against the staging site, so it will have a lot of false positives.

wget -r --spider -U "404 check with wget" -o wwwbioc.log http://master.bioconductor.org
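Once wget finishes, the 404s can be grepped out of the log. This sketch uses a synthetic log fragment (the format is approximated, not captured from a real run):

```shell
# Synthetic wget -o log fragment: each request URL line is followed by
# the HTTP response line.
cat > /tmp/wwwbioc.log <<'EOF'
--2020-03-06 10:00:01--  http://master.bioconductor.org/help/missing/
HTTP request sent, awaiting response... 404 Not Found
--2020-03-06 10:00:02--  http://master.bioconductor.org/install/
HTTP request sent, awaiting response... 200 OK
EOF

# Print each 404 response preceded by the URL that triggered it.
grep -B1 '404 Not Found' /tmp/wwwbioc.log
```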

Optimize redirects

Currently the redirects are defined using Apache's mod_rewrite in a top-level .htaccess file. This has the advantage of allowing easy revision of the rewrite rules via git that are picked up by Apache on site update. The downside is that using .htaccess files is suboptimal in terms of performance. So before the site is launched, consider the following changes:

  1. Copy the directives in the top-level .htaccess file to the site's vhost config /etc/apache2/vhosts.d/bioconductor-test.conf.

  2. Remove the .htaccess file

  3. Edit the same vhosts.d config file to set Options to None for the top-level directory. This should disable .htaccess files as it isn't enough just to remove the .htaccess file itself.

Site Search

The site search contains several moving parts. The search is built on Apache Solr, which is in turn built on top of Apache Lucene.

How to configure Solr

The default Solr installation works fine, with the exception of the file example/solr/conf/schema.xml, which must be replaced with the version in this repository at etc/solr. The changes in this file enable search query highlighting and snippets.

Solr can be started up as follows (SOLR_HOME is assumed to be the location where the solr tarball has been expanded):

cd $SOLR_HOME/example; java -jar start.jar

How to ensure that Solr is started up at boot time (on master and staging)

On both machines there are /etc/rc.local and /etc/init.d/rc.local scripts which start Solr as above.

How to configure the Apache web server to work with Solr

Using a2enmod, we added support for the "proxy" and "proxy_http" modules to the Apache web server. Then we added the following (if not already present) to /etc/apache2/sites-available/000-default.conf (master):

ProxyRequests Off
<Proxy *>
 Order deny,allow
 Allow from all
</Proxy>
ProxyPass /solr/default/select http://localhost:8983/solr/default/select
ProxyPreserveHost On
ProxyStatus On

This means that all requests starting with "/solr" will go to the solr server on port 8983. This allows us to make requests to the search server without violating the "same-origin" policy.
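To make the same-origin trick concrete, this sketch just prints the two URLs involved; the query string is illustrative:

```shell
# The browser hits a /solr path on the site's own host (same origin);
# Apache forwards the request to the Solr server on port 8983.
path="/solr/default/select"
query="q=limma&wt=json"
echo "browser requests:   http://bioconductor.org${path}?${query}"
echo "apache forwards to: http://localhost:8983${path}?${query}"
```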

How the client-side portion of the search works

The page /search/index.html includes some javascript (in the file js/search.html) which in turn uses jQuery. The code parses the arguments in the URL and then makes an AJAX request to the SOLR server which returns a JSON string. The javascript code converts that to an object and then renders the search response page.

How to rebuild the search index

Note that you typically do not want to do this by hand as it is handled by cron jobs (see below).

NOTE: this may need debugging on staging.bioconductor.org following the transition from merlot2.

On staging.bioconductor.org (ssh to staging.bioconductor.org):

cd ~/bioc-test-web/bioconductor.org
rake index_production  (see also rake search_index)

What this command does:

  • Runs a Ruby class which determines which files need to be (re)indexed.
  • This uses a cache file containing the names of each file and their modification times as of the last time the script was run. If the cache file does not exist, all files are indexed. This class also handles new files and deletions.
  • The class actually does not do the indexing itself; it creates another script (index.sh -- created by scripts/search_indexer.rb) which does the actual indexing, which is accomplished by using curl to post files to the SOLR web app.
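As a rough illustration of what such a generated index.sh might look like (the core name, handler paths, and file list here are assumptions, and nothing below actually contacts a server):

```shell
# Hypothetical sketch of an index.sh as generated by search_indexer.rb:
# one curl POST per changed file to the Solr extract handler, then a commit.
SOLR=http://localhost:8983/solr/default
{
  echo '#!/bin/bash'
  for f in help/index.html install/index.html; do
    echo "curl -s '$SOLR/update/extract?literal.id=/$f' -F 'file=@output/$f'"
  done
  echo "curl -s '$SOLR/update' --data-binary '<commit/>'"
} > /tmp/index.sh
chmod +x /tmp/index.sh
cat /tmp/index.sh
```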

To re-index files on master, ssh to staging (not master) and do this:

cd ~/bioc-test-web/bioconductor.org
rake index_production

Cron jobs for rebuilding the search index/why it is decoupled from site update

Doing "crontab -l" on staging shows how the index is updated on master. Here are the relevant lines:

# create search index:
30 */4 * * * cd $HOME/bioc-test-web/bioconductor.org && rake index_production > $HOME/bioc-test-web/production_index.log 2>&1

Notice that the search indexing process is decoupled from the site building process (which takes place every 20 minutes). Site indexing can be time-consuming, especially on master, where there are many more files to be indexed (originating from the build system), while site rebuilding should be quick; so search indexing runs on its own, less frequent schedule.

BiocViews Pages

The BiocViews pages are generated by a three-step process:

Step 0: Obtain manifest git repository

The manifest git repository is available from the Bioconductor git server. It contains a list of all current Bioconductor packages. Make sure that it is in the same folder as the bioconductor.org repository checkout. To clone it, first ensure appropriate access rights and then run:

git clone [email protected]:admin/manifest.git

Step 1: rake get_json

This is run by a cron job on staging every day between 3 and 8 PM EST (presumably after the build system has finished and copied all its output to master). Here is the cron job:

*/15 15-20 * * * cd $HOME/bioc-test-web; ./get_json.sh > $HOME/bioc-test-web/get_json.log 2>&1

This Rake target runs some R code which calls code in the BiocViews package, extracting package data in JSON format and putting it in assets/packages/json. Then a ruby script is run which processes that JSON into a format usable by the javascript tree widget.

If you want to run this target on your own machine, you need R available with the biocViews (Bioconductor) and rjson (CRAN) packages installed.

Step 2: Build package detail pages

This is done by nanoc and handled by the DataSource subclass BiocViews (found in lib/data_sources/bioc_views.rb). This data source uses the JSON files generated in the previous step to build a page for each package, one for release and one for devel. The pages are rendered by the partial layouts/_bioc_views_package_detail.html.

Step 3: The BiocViews Hierarchy page

At http://bioconductor.org/packages. This page uses javascript to build the tree, reading in data generated in step 1. The relevant Javascript file is assets/js/bioc_views.js. The automatically generated (by rake) file output/js/versions.js is also sourced.

Updating the site during a release

Take a look at the config.yaml file in the root of the bioconductor.org working copy. This should be the only place you need to make changes.

Standard Operating Procedures / SOPs / Troubleshooting

Problem: Web site does not seem to be updating

Symptom: Commits you made are not showing up, and/or the dashboard (http://bioconductor.org/dashboard/) says that the site has not been updated in over 20 minutes. This likely means that an error was introduced in a recent commit (make sure you haven't forgotten to git add any files).

Solution: ssh to [email protected] (ask Lori if you don't have permission to do so). Change directories:

cd ~/bioc-test-web

Check the timestamp of ./log/update_site.log; if it is recent (e.g. 2015 Oct 29 10:22:19 AM), its contents are relevant. Look at the last few lines of the log for errors.

Updating Ruby or Gems

See separate README.updatingRubyOrGems


bioconductor.org's Issues

Error in Biostrings

Hi all,

Even in these horrific times of COVID-19 pandemic we try to focus on our research.

The DADA2 pipeline Tutorial was very useful but now I am struggling big time on the phyloseq part.
I just removed all Eukaryota and NA from my data set:
ps_EPSO1_noEuks_noUnk <- subset_taxa(ps_EPSO1, Kingdom!="Eukaryota" &
Phylum!=" ")

print(ps_EPSO1_noEuks_noUnk)
phyloseq-class experiment-level object
otu_table() OTU Table: [ 3455 taxa and 40 samples ]
sample_data() Sample Data: [ 40 samples by 13 sample variables ]
tax_table() Taxonomy Table: [ 3455 taxa by 6 taxonomic ranks ]
refseq() DNAStringSet: [ 3455 reference sequences ]

But I receive on my next step an error:

dna <- Biostrings::DNAStringSet(taxa_names(ps_EPSO1_noEuks_noUnk))
Error in .Call2("new_XStringSet_from_CHARACTER", class(x0), elementType(x0), :
key 51 (char '3') not in lookup table

If I look at my ps_EPSO1_noEuks_noUnk, my phy_tree is NULL. I have no idea why.

Some help would be much appreciated!

BW
H.
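
The error above usually means that taxa_names() is returning ASV identifiers (strings containing digits) rather than DNA sequences, so DNAStringSet() hits a character outside the DNA alphabet. A quick diagnostic sketch, written in Python with the IUPAC DNA alphabet assumed to match Biostrings' (check Biostrings::DNA_ALPHABET in R for the authoritative list):

```python
# IUPAC DNA codes plus the gap symbols Biostrings accepts (an assumption;
# consult Biostrings::DNA_ALPHABET in R for the definitive set).
DNA_ALPHABET = set("ACGTMRWSYKVHDBNacgtmrwsykvhdbn-+.")

def invalid_dna_chars(name):
    """Characters in `name` that a DNAStringSet constructor would reject."""
    return sorted(set(name) - DNA_ALPHABET)
```

Running this over each taxa name pinpoints offenders like the '3' in the error message.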

Add a badge for GitHub/GitLab repository, and or issue tracker?

For a long time I have been considering contributing to existing packages hosted on Bioconductor that I use every day. I believe in incremental improvements and would prefer to support the work of the original creator rather than create a new package just because the existing one has bugs or imperfections.

However, I find it difficult to navigate the space from the developer's point of view. As I just described in a tweet storm here, it is often difficult to know whether a package has a GitHub/GitLab repository, and I believe that this is valuable information. I see GitHub/GitLab (and other similar hubs) as extremely valuable because, in addition to hosting the git repository, they provide:

  • bug tracking (with the bug status clearly defined, and easier to explore than searching through the Bioconductor support forum)
  • other tools fostering collaboration, such as forking, pull-requests, automated actions
  • easy to use interface for code exploration

and finally, they encourage good coding practices by allowing easy integration with linters, security scanners and other automated code intelligence tools.

I was wondering whether it would be possible to gently encourage existing packages to create a GitHub/GitLab presence or similar, and to make navigation from one to the other easier?

GitHub and Bioconductor badges

One idea would be to provide a badge linking from a package website to GitHub, and a second one from the repository's README to the package website. The former could be autogenerated, and the latter could be promoted by encouraging maintainers to add it in tutorials (it could use an existing solution, e.g. badger, though I would prefer a version showing the published release rather than the download count).

For examples let's look at the top 5 Bioconductor packages:

  1. BiocVersion: package, GitHub - no link from one to the other, either way
  2. BiocGenerics: package, GitHub - no link from package to GitHub
  3. S4Vectors: package, GitHub - no link from package to GitHub
  4. IRanges: package, GitHub - no link from package to GitHub
  5. BiocGenerics: package, GitHub - no link from package to GitHub

And at the top 30 packages which do not constitute the core infrastructure:

  1. zlibbioc - core
  2. AnnotationDbi - core
  3. XVector - core
  4. BiocParallel - core
  5. GenomeInfoDb - core
  6. DelayedArray - core
  7. GenomicRanges - core
  8. SummarizedExperiment - core
  9. limma: package, no GitHub/GitLab etc - in vacuum people either refer to gravely outdated CRAN limma mirror (13 years old version!) or create their own mirrors, e.g. gangwug/limma
  10. Biostrings - core
  11. Rsamtools - core
  12. biomaRt: package, GitHub - no link from one to the other, despite the issues tracker containing important information about the state of the package
  13. annotate - core
  14. genefilter - core
  15. GenomicAlignments - core
  16. Rhtslib - core
  17. graph - core
  18. rtracklayer: package, GitHub - no link from one to the other, either way; moreover the search also returns an older mirror from @mtmorgan which might confuse at first; the issue tracker contains useful information that the user should be aware of
  19. edgeR: package - no collaborative platform like GitHub nor GitLub, or I could not find any
  20. GenomicFeatures - core
  21. BiocFileCache - core
  22. DESeq2: package, GitHub - this is exceptional, because the maintainer uses the URL fields in both the GitHub and package description to create a superb experience linking the two pages together; moreover, the maintainer explains when to post an issue on the GitHub repo, and when to ask a question on the Bioconductor support forum
  23. Rhdf5lib: package, GitHub - another great example; both URL and BugReports fields are utilised
  24. geneplotter - core
  25. rhdf5 (same as Rhdf5lib)

Having two badges, one from the package website to the GitHub repo and one the other way round, would help greatly here!
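
As a sketch of what those two badges could look like in a README, the helper below generates the markdown. It is hypothetical, not an actual Bioconductor template: the shields.io github/issues endpoint is real, and https://bioconductor.org/packages/<pkg> is the canonical short landing-page URL, but everything else is illustrative.

```python
def github_issues_badge(owner, repo):
    """Markdown badge showing a repo's open-issue count, linking to its tracker."""
    img = f"https://img.shields.io/github/issues/{owner}/{repo}"
    return f"[![open issues]({img})](https://github.com/{owner}/{repo}/issues)"

def landing_page_link(pkg):
    """Markdown link from a README back to the Bioconductor landing page."""
    return f"[Bioconductor landing page](https://bioconductor.org/packages/{pkg})"
```

The issue-count badge doubles as the "[7 issues]" indicator discussed below, since shields.io renders the live count.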

Contributions friendly?

A variation of this proposal would be to have a badge saying "welcoming contributors" or "contributor friendly" on the Bioconductor package site; this would signify that the maintainer has opted in to providing the repository address and will consider PRs with bug fixes and improvements.

Issues count badge?

The final variation is to have a badge showing the number of open issues. I believe that this is very important, because issues can be discovered after a release and users should know the limitations of the package; they should not have to read through all the support forum questions and answers to discover that there is a bug that changes the result - this is not what I would expect a typical user to do just after installing a package. However, should they see a badge saying [7 issues], they might be inclined to check it.

I would emphasise that this badge would have a different purpose from the existing "posts" badge, which counts the questions and answers on the support website; a popular package might have thousands of usage questions, but only a few bugs at any given time. It is not important for the integrity of the research that users read the 1000s of usage questions, but it is important that they are aware of the few bugs which might or might not affect their use case.

Apologies if this is not an appropriate place to post this idea.

WISH: Make it clear that Bioc 3.12 requires R (>= 4.0.0)

I find the R version requirement on https://bioconductor.org/install/:

The current release of Bioconductor is version 3.12; it works with R version 4.0.3. Users of older R and Bioconductor must update their installation ...

ambiguous, i.e. I read it as saying R (>= 4.0.3) was required. Per Bioconductor/BiocManager#82 (comment), R 4.0.0 should be sufficient. May I suggest updating the webpage to read:

The current release of Bioconductor is version 3.12; it requires R version 4.0.0 or newer where R version 4.0.3 is highly recommended. Users of older R and Bioconductor must update their installation ...

I see that "4.0.3" comes from r_version_associated_with_release in:

r_version_associated_with_release: "4.0.3"

Revise workflow instructions

Some of the instructions might not be valid anymore after the changes from the last few months. In particular, the approach described under Using Math Symbols in a Markdown workflow vignette is not necessary anymore.

Rank badges link to non-existing pages

For example, the 'rank' badge on https://www.bioconductor.org/packages/release/bioc/html/Biobase.html links to http://bioconductor.org/packages/stats/bioc/Biobase/, which gives a 403 error;

 curl --head http://bioconductor.org/packages/stats/bioc/Biobase/
HTTP/1.1 403 Forbidden
Content-Type: text/html
Content-Length: 7136
Connection: keep-alive
Date: Fri, 17 Mar 2023 17:50:05 GMT
Server: Apache/2.4.18 (Ubuntu)
Last-Modified: Fri, 17 Mar 2023 17:35:21 GMT
ETag: "1be0-5f71c0064afd6"
Accept-Ranges: bytes
X-Cache: Error from cloudfront
Via: 1.1 9525a1adf6d0a16da3bb7589fe5684a4.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: SFO5-P1
X-Amz-Cf-Id: MSUo6N8UKdN1deNBfrjBJFKjH13oZcBkGHolqPq9ZMkhuUjhE94xrw==

Also, the badge link is a plain HTTP URL, even though it is served from an HTTPS page.
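
The protocol issue could be fixed at page-generation time by rewriting site-internal links; a sketch of such a rewrite rule (the host list is an assumption):

```python
from urllib.parse import urlsplit, urlunsplit

# Hosts whose links should always be upgraded to HTTPS (assumed list).
BIOC_HOSTS = {"bioconductor.org", "www.bioconductor.org"}

def force_https(url):
    """Rewrite http:// links to bioconductor.org hosts as https://,
    leaving third-party URLs untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.netloc in BIOC_HOSTS:
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

Protocol-relative URLs (`//bioconductor.org/...`) would be an alternative design, but an explicit https:// scheme is less fragile.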

Mention more time zones for the build reports

Hi, I never manage to wrap my mind around the bioc build farm timing.
I aim to submit a PR to https://github.com/Bioconductor/bioconductor.org/blob/master/content/developers/how-to/troubleshoot-build-report.md
which adds more time zones. I am open to opinions whether

  1. inline or as a small table with "pull snapshot / build report" x "time zone" and
  2. which time zone(s).

Question: Do you edit that page when you switch between EST <-> EDT? Or is it always 14:30 New York time?
Or does the reader have to apply that DST offset in summer herself? Or what is an hour between friends?
I would like to add Europe (UTC and/or UTC+2 CET) and Asia (UTC+8 CST and/or UTC+9 JST).
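
Rather than hand-maintaining DST offsets in the page, the conversions could be generated from the IANA zone database, which handles EST/EDT automatically. A sketch assuming the 14:30 New York time mentioned above (function name hypothetical; requires Python 3.9+ for zoneinfo):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def deadline_elsewhere(year, month, day, zones,
                       hour=14, minute=30, home="America/New_York"):
    """Express a 14:30 New-York wall-clock time in other IANA zones;
    the zone database applies the correct EST/EDT offset for the date."""
    t = datetime(year, month, day, hour, minute, tzinfo=ZoneInfo(home))
    return {z: t.astimezone(ZoneInfo(z)).strftime("%H:%M") for z in zones}
```

In July the New York offset is UTC-4 (EDT), so 14:30 becomes 18:30 UTC; in January it is UTC-5 (EST), giving 19:30 UTC, with no manual edits to the page.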

Preview of the support site from main project site

Hi,
I have some quick feedback on the new support site (which looks so much better, well done!): when checking from the Bioc homepage, the post type information generates "extra clutter"

[screenshot]

i.e. I probably don't need to see Comment: Comment: or Answer: Answer:, but rather as much of each post title as possible.
Is this expected or desired behaviour? Just thinking out loud, as maybe some of these are unwanted side effects of the switch.

As @lshep pointed out in Slack,

this actually is more to do with how the main site gets the recent posts but will have to look further into it

This is mainly to make the original conversation less volatile 😃
Thanks for looking into it!
Federico

Getting 429 back from all requests to Bioconductor

Hello!

Whenever I make any request to Bioconductor, e.g.:

$ curl -O https://bioconductor.org/packages/3.13/bioc/bin/macosx/contrib/4.1/BiocVersion_3.13.1.tgz -vvv

I'm getting a 429 back:

< HTTP/1.1 429 Too Many Requests
< Server: CloudFront
< Date: Thu, 10 Mar 2022 16:29:06 GMT
< Content-Length: 0
< Connection: keep-alive
< X-Cache: Error from cloudfront
< Via: 1.1 040f8a2cdffe1cf7a35d28e06c3ed574.cloudfront.net (CloudFront)
< X-Amz-Cf-Pop: IAD89-P1
< X-Amz-Cf-Id: gr3Xb8ZBaIOIzj3lldhc3ReWYzOvZ7V98DdnGXDAOhSnHVGK6sNisw==

I was previously making requests to install packages on a schedule and am wondering if perhaps my IP was added to a block list?
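
If the 429s come from CloudFront rate limiting, a scheduled client can be made friendlier by honouring any Retry-After header and backing off exponentially between attempts. A sketch of just the delay schedule (names and defaults are hypothetical, not a documented Bioconductor policy):

```python
import random

def backoff_delays(retry_after=None, max_tries=5, base=2.0, cap=120.0):
    """Seconds to sleep between successive retries after an HTTP 429.
    The first delay honours a Retry-After header if the server sent one;
    later delays grow exponentially, capped, with a little jitter."""
    delays = []
    for attempt in range(max_tries):
        if attempt == 0 and retry_after is not None:
            delays.append(float(retry_after))
        else:
            delays.append(min(cap, base * 2 ** attempt) + random.uniform(0, 1))
    return delays
```

The jitter spreads out retries from many clients on the same schedule, which is usually what triggers rate limits in the first place.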

Add Mac arm64 binaries to package landing pages

@lshep @jwokaty

With the Mac arm64 binaries now available in BioC 3.16, we need to discuss the impact on package landing pages.

The good news is that it looks like Jen already made the necessary adjustments in biocViews::genReposControlFiles() to list those binaries in the VIEWS file. See the mac.binary.big-sur-arm64.ver lines in https://bioconductor.org/packages/3.16/bioc/VIEWS 👍

So the scripts on staging would need to be modified to put the link to those binaries on the package landing pages. See for example https://bioconductor.org/packages/3.16/Biobase. The various types of packages are listed at the bottom of the page. The new binaries could be labelled as "macOS binary (arm64)" for consistency with what the CRAN folks do on their own package landing pages (see for example https://cran.r-project.org/package=car). Also note that, right now, we label the Mac Intel binaries as "macOS 10.13 (High Sierra)" but we should probably re-label them as "macOS binary (x86_64)" for consistency with CRAN.

One remaining caveat is what happens to the "build" badge. I don't know the details but I suspect that the script on staging consults the BUILD_STATUS_DB.txt file to infer the overall "build" status of a package. Problem is that the arm64 builds are their own separate builds at the moment (i.e. they're not part of the daily builds), with their own schedule (they run twice a week only). This might change in the future when we have enough Mac arm64 computing power to keep up with the pace of the daily builds. In the meantime, this means that the build status for kjohnson is not included in the BUILD_STATUS_DB.txt file produced by the daily builds. Instead it's in its own BUILD_STATUS_DB.txt file here: https://bioconductor.org/checkResults/3.16/bioc-mac-arm64-LATEST/BUILD_STATUS_DB.txt Of course there are ways to handle this but...

The easy solution is to not do anything about it and live with the fact that an arm64 failure won't have any effect on the "build" badge 😉. Note that, even if we were able to generate the correct "build" badge (i.e. a badge that also considers arm64 build status), clicking on the badge would still take you to the daily build report where the results for arm64 are not displayed. It would be very confusing to display a red "build" badge on the package landing page, because of an arm64 failure, and to end up on a build report where everything is green!

So probably not a big deal to ignore the arm64 build status for the "build" badge for now. It's a temporary situation anyways.
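
If the staging script ever does want to fold the arm64 results in, it could merge the two BUILD_STATUS_DB.txt files and take the worst status per package. A sketch, assuming a 'pkg#node#stage: STATUS' line format (an assumption about the file layout; verify against an actual BUILD_STATUS_DB.txt):

```python
def overall_build_status(*status_db_texts):
    """Fold one or more BUILD_STATUS_DB.txt files (assumed line format:
    'pkg#node#stage: STATUS') into the worst status seen per package,
    so an arm64 ERROR could turn the landing-page badge red."""
    rank = {"OK": 0, "WARNINGS": 1, "TIMEOUT": 2, "ERROR": 3}
    worst = {}
    for text in status_db_texts:
        for line in text.splitlines():
            if "#" not in line or ":" not in line:
                continue
            key, _, status = line.rpartition(":")
            pkg = key.split("#", 1)[0]
            status = status.strip()
            if rank.get(status, 0) >= rank.get(worst.get(pkg, "OK"), 0):
                worst[pkg] = status
    return worst
```

The badge-to-report mismatch described above would remain, though: this only changes the badge colour, not where clicking it lands.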

Let me know how I can help.

Thanks,
H.

SSL peer certificate or SSH key is not OK

Got a warning about

URL 'https://bioconductor.org/config.yaml': status was 'SSL peer certificate or SSH remote key was not OK'

If I go to the url I see:

Screenshot from 2019-10-03 22-48-10

In addition, I see this :

> BiocManager::valid()
Warning: unable to access index for repository https://bioconductor.org/packages/3.9/bioc/src/contrib:
  cannot open URL 'https://bioconductor.org/packages/3.9/bioc/src/contrib/PACKAGES'
Warning: unable to access index for repository https://bioconductor.org/packages/3.9/data/annotation/src/contrib:
  cannot open URL 'https://bioconductor.org/packages/3.9/data/annotation/src/contrib/PACKAGES'
Warning: unable to access index for repository https://bioconductor.org/packages/3.9/data/experiment/src/contrib:
  cannot open URL 'https://bioconductor.org/packages/3.9/data/experiment/src/contrib/PACKAGES'
Warning: unable to access index for repository https://bioconductor.org/packages/3.9/workflows/src/contrib:
  cannot open URL 'https://bioconductor.org/packages/3.9/workflows/src/contrib/PACKAGES'

Text on labels is wider than the image

I noted today that there are some reading problems with the labels on the landing pages: [screenshot: labels of a package]

I could see it in both Firefox on Ubuntu and Chrome on Windows. It is probably related to the badge reorganization in this commit.

update support site / post link

Check the post badge. It does not appear to link to the correct API endpoint.

There was also discussion on the team to change the display of this badge. Did we want to pursue this?

Add `Edit on GitHub` buttons to every page

It would be great to have an Edit on GitHub button on each page so that users can contribute to the content of the website.

It can be a button or link and it can look like this:

Edit this page

where it points to https://github.com/Bioconductor/bioconductor.org/edit/master/content/developers/package-guidelines.md
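
The button's target can be derived mechanically from the page path. A sketch, assuming every rendered page comes from a content/<path>.md source (which would not hold for generated pages such as package landing pages):

```python
def edit_on_github_url(page_path,
                       repo="Bioconductor/bioconductor.org",
                       branch="master"):
    """Map a rendered site path to the GitHub edit URL of its
    Markdown source under content/ (assumed layout)."""
    slug = page_path.strip("/")
    return f"https://github.com/{repo}/edit/{branch}/content/{slug}.md"
```

GitHub's /edit/ URLs automatically offer a fork-and-PR flow to visitors without push access, so the template only needs this one link.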

Case sensitiveness in URLs

Bioconductor.org is the only website I know of that has case-sensitive URLs.

For example:

this link works, because it has the capital B in the HTML file name.

this one doesn't (as in page not found, custom 404 message).
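
One low-risk mitigation would be a 404 fallback that redirects a miscased request to the canonical path; a sketch of the lookup (hypothetical, and it assumes no two canonical paths differ only by case):

```python
def canonical_lookup(known_paths):
    """Build a case-insensitive fallback: given the site's known paths,
    return a resolver that maps a miscased request to the canonical one
    (or None when even the lowercased path is unknown)."""
    index = {p.lower(): p for p in known_paths}
    def resolve(request_path):
        return index.get(request_path.lower())
    return resolve
```

The 404 handler would issue a 301 redirect to the resolved path when it is not None, and fall through to the custom 404 page otherwise.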

config.yaml: (Bioconductor, R) mapping for Bioconductor 1.0-1.5?

The config.yaml file specifies (Bioconductor, R) mappings for Bioconductor (>= 1.6);

r_ver_for_bioc_ver:
"1.6": "2.1"
"1.7": "2.2"

However, https://github.com/Bioconductor/bioconductor.org/blob/master/content/about/release-announcements.md provides such mappings from Bioconductor 1.0:

| Release | Date | Software packages | R |
|---------|------|-------------------|---|
| 1.6 | May 18, 2005 | 123 | 2.1 |
| 1.5 | October 25, 2004 | 100 | 2.0 |
| 1.4 | May 17, 2004 | 81 | 1.9 |
| 1.3 | October 30, 2003 | 49 | 1.8 |
| 1.2 | May 29, 2003 | 30 | 1.7 |
| 1.1 | November 19, 2002 | 20 | 1.6 |
| 1.0 | May 1, 2002 | 15 | 1.5 |

Is there a reason the latter are left out of the config.yaml file? If not, I can do a PR if you'd like.
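
The missing entries would simply extend the existing map; a sketch of the merged lookup, with the early mappings taken from the release-announcements table above (function name hypothetical):

```python
# (Bioconductor, R) pairs for releases 1.0-1.5, from the
# release-announcements page; config.yaml starts at 1.6.
EARLY_R_VER_FOR_BIOC_VER = {
    "1.0": "1.5", "1.1": "1.6", "1.2": "1.7",
    "1.3": "1.8", "1.4": "1.9", "1.5": "2.0",
}

def r_version_for(bioc_version, r_ver_for_bioc_ver):
    """Resolve the R version for a Bioconductor release, falling back
    to the early releases absent from config.yaml."""
    return r_ver_for_bioc_ver.get(bioc_version,
                                  EARLY_R_VER_FOR_BIOC_VER.get(bioc_version))
```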

Advertising closed positions

The home page is advertising some positions that are filled (I hope, because they link to a page that says: "We're sorry! The page you are looking for is not available.").
The relevant lines are here:

<li><strong>Core team job opportunities
for <a href="https://www.roswellpark.org/careers/administrative/programmeranalyst-5817">scientific
programmer / analyst</a>
and <a href="https://www.roswellpark.org/careers/administrative/senior-programmeranalyst-5656">senior
programmer / analyst</a>!</strong> </li>

https:// pages sometimes redirect to http://

this might be a little vague, but if I am on, for instance, https://bioconductor.org/packages/BiocParallel and click on one of the badges, e.g. "build", I end up at http://bioconductor.org/checkResults/release/bioc-LATEST/BiocParallel/ but should end up at the equivalent https:// URL.

This does not seem to happen when I click on other links on the page, e.g., on the links in the 'Documentation' section.

This was mentioned in the community slack https://community-bioc.slack.com/archives/C35G93GJH/p1683122919846559 although I don't know if the original or only cause is clicking on the buttons.

bioclite

Hi,

Do you have to reinstall bioclite every single time you open rstudio? I ask this because that is what is happening to me. Is there a way to install bioclite permanently? Thank you!

"docker run -v /<full_path>/... " fails

Building with Docker fails as shown below.
Please let me know if you have any ideas for solving this.

cd ~
git clone https://github.com/Bioconductor/bioconductor.org
docker run -v /home/kozo2/bioconductor.org:/bioconductor.org/ -p 3000:3000 bioconductor/website:latest
copying bioc_views.html to content/packages/3.12/BiocViews.html
copying bioc_views.html to content/packages/3.13/BiocViews.html
Loading site… done
kramdown warning(s) for <Nanoc::CompilationItemRepView item.identifier=/developers/how-to/git/new-package-workflow/ name=default>
  No link definition for link ID '1' found on line 7
kramdown warning(s) for <Nanoc::CompilationItemRepView item.identifier=/developers/how-to/troubleshoot-build-report/ name=default>
  No link definition for link ID 'example [biobase](https://bioconductor.org/packages/release/bioc/html/biobase.html)' found on line 16
(Note: several nanoc writer threads crashed concurrently, so their backtraces are interleaved in the raw log. The kramdown warnings and one representative backtrace are reproduced below; the files reported missing were output/style/style.css, output/about/removed-packages/index.html and output/about/scientific-advisory-board/index.html.)

kramdown warning(s) for <Nanoc::CompilationItemRepView item.identifier=/news/bioc_3_9_release/ name=default>
  No link definition for link ID 'fl' found on line 2781
  No link definition for link ID 'flwy' found on line 2781
  No link definition for link ID 'from @vlakam' found on line 4658
  No link definition for link ID 'msdata::spectrum::getmzintensitypairs()' found on line 6041
  No link definition for link ID 'a-z' found on line 6957
  No link definition for link ID '5' found on line 7238
  No link definition for link ID 'i' found on line 7722
  Found no end tag for 'NON_REF' (line 7975) - auto-closing it
  No link definition for link ID '2019-03-27' found on line 8476
  No link definition for link ID '2019-01-04' found on line 8492
  No link definition for link ID '2018-12-07' found on line 8498
  No link definition for link ID '2018-12-07' found on line 8501
  No link definition for link ID '2018-12-07' found on line 8502
  No link definition for link ID '2018-12-07' found on line 8504
  No link definition for link ID '2018-11-07' found on line 8510

Errno::ENOENT: No such file or directory @ rb_sysopen - output/style/style.css
/usr/local/lib/ruby/2.6.0/fileutils.rb:1386:in `initialize'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:1386:in `open'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:1386:in `block in copy_file'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:1385:in `open'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:1385:in `copy_file'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:492:in `copy_file'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:419:in `block in cp'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:1557:in `block in fu_each_src_dest'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:1573:in `fu_each_src_dest0'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:1555:in `fu_each_src_dest'
        from /usr/local/lib/ruby/2.6.0/fileutils.rb:418:in `cp'
        from /usr/local/bundle/gems/nanoc-4.9.0/lib/nanoc/base/services/item_rep_writer.rb:51:in `write_single'
        from /usr/local/bundle/gems/nanoc-4.9.0/lib/nanoc/base/services/item_rep_writer.rb:18:in `block in write'
        from /usr/local/bundle/gems/nanoc-4.9.0/lib/nanoc/base/services/item_rep_writer.rb:17:in `each'
        from /usr/local/bundle/gems/nanoc-4.9.0/lib/nanoc/base/services/item_rep_writer.rb:17:in `write'
        from /usr/local/bundle/gems/nanoc-4.9.0/lib/nanoc/base/services/item_rep_writer.rb:12:in `block in write_all'
        from /usr/local/bundle/gems/nanoc-4.9.0/lib/nanoc/base/services/item_rep_writer.rb:11:in `each'
        from /usr/local/bundle/gems/nanoc-4.9.0/lib/nanoc/base/services/item_rep_writer.rb:11:in `write_all'
        from /usr/local/bundle/gems/nanoc-4.9.0/lib/nanoc/base/services/compiler/phases/write.rb:21:in `block in start'

Captain! We’ve been hit!

Errno::ENOENT: No such file or directory @ rb_sysopen - output/style/style.css

  0. /usr/local/lib/ruby/2.6.0/fileutils.rb:1386:in `initialize'
  1. /usr/local/lib/ruby/2.6.0/fileutils.rb:1386:in `open'
  2. /usr/local/lib/ruby/2.6.0/fileutils.rb:1386:in `block in copy_file'
  3. /usr/local/lib/ruby/2.6.0/fileutils.rb:1385:in `open'
  4. /usr/local/lib/ruby/2.6.0/fileutils.rb:1385:in `copy_file'
  5. /usr/local/lib/ruby/2.6.0/fileutils.rb:492:in `copy_file'
  6. /usr/local/lib/ruby/2.6.0/fileutils.rb:419:in `block in cp'
  7. /usr/local/lib/ruby/2.6.0/fileutils.rb:1557:in `block in fu_each_src_dest'
  8. /usr/local/lib/ruby/2.6.0/fileutils.rb:1573:in `fu_each_src_dest0'
  9. /usr/local/lib/ruby/2.6.0/fileutils.rb:1555:in `fu_each_src_dest'
  ... 9 lines omitted (see crash.log for details)

A detailed crash log has been written to ./crash.log.

http://www.bioconductor.org/developers/how-to/useDevel/ still references BiocInstaller

I was making changes requested by @LiNk-NY in https://stat.ethz.ch/pipermail/bioc-devel/2018-July/013815.html. In updating GenomicTuples I saw that I had linked to http://www.bioconductor.org/developers/how-to/useDevel/ for instructions on using the devel branch, but that page still references BiocInstaller rather than BiocManager. Is there a new link I should be using with updated instructions, or will this page be updated?

Add a license

Hey folks! It would be great to have a license here so that contributors know what terms their contributions fall under.

GitHub actions and styler suggestions

Hi,

Based on my recent work for r-lib/actions#84, I want to make a few small PRs to the BioC website. The feasibility of the PRs for the Bioc website also depend on r-lib/usethis#1108, r-lib/styler#636 and Bioconductor/BiocCheck#57.

Assuming things go well with r-lib/actions, r-lib/usethis, r-lib/styler and Bioconductor/BiocCheck, I think that it would be useful to add a help page like https://www.bioconductor.org/help/docker/ to the Bioconductor website that illustrates how you can set up a GitHub Actions workflow like https://github.com/leekgroup/derfinderPlot/blob/master/.github/workflows/check-bioc.yml for your package. So one PR would involve adding such a file.

The other PR would be related to styler and is directly related to Bioconductor/BiocCheck#57. It would involve editing http://www.bioconductor.org/developers/how-to/coding-style/ to mention styler.

Best,
Leo

Is there a packages.rds file like CRAN's https://cran.r-project.org/web/packages/packages.rds

I maintain groundhog, a package that brings version control to R package handling. I am about to launch v2.0.0, which will add GitHub and GitLab packages, but it still does not work with Bioconductor packages. I plan on adding this capability in the next major release. To do so it would be useful to have a database with all packages and their dependencies for each Bioconductor version, something like https://cran.r-project.org/web/packages/packages.rds. Does such a file exist? I guess it is possible to scrape this information from the site, but it seems much better to simply download the file if it exists. Thanks,

Uri

http://groundhogr.com
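
Bioconductor repositories follow the standard CRAN layout, so each repository (software, annotation, experiment, workflows) serves a plain-text PACKAGES index under src/contrib; whether a PACKAGES.rds is also published I can't confirm, but the text index already carries the dependency fields. A sketch of parsing its DCF records:

```python
def parse_packages_index(text):
    """Parse a CRAN-style PACKAGES index (DCF format: 'Field: value'
    records separated by blank lines, with continuation lines indented)
    into {package: {field: value}}."""
    records, current, field = {}, {}, None
    for line in text.splitlines() + [""]:   # trailing "" flushes the last record
        if not line.strip():
            if current.get("Package"):
                records[current["Package"]] = current
            current, field = {}, None
        elif line[:1].isspace() and field:
            current[field] += " " + line.strip()  # continuation line
        else:
            field, _, value = line.partition(":")
            field = field.strip()
            current[field] = value.strip()
    return records
```

The Depends/Imports/Suggests fields of every record give exactly the dependency database described above, per repository and per Bioconductor version.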

broken link

https://bioconductor.org/developers/package-submission/#annPackage

includes broken link
at

Annotation Packages
Annotation packages contain lightly or non-curated data from a public source and are updated with each Bioconductor release (every 6 months). They are a source of general annotation for one or many organisms and are not specific to a particular experiment. When possible, they should support the select() interface from AnnotationDbi.

Annotation packages should NOT be posted to the tracker repository. Instead send an email to [email protected] with a description of the proposed annotation package, and further instructions on where to send the package will be provided. Whenever possible Annotation Packages should use AnnotationHub for managing files.

The reference to AnnotationHub in the last line is the broken one:

https://bioconductor.org/packages/devel/bioc/vignettes/AnnotationHub/inst/doc/CreateAnAnnotationPackage.html

Search results not clear enough

Whenever someone searches for a package, the results are somewhat overwhelming... the different releases could use some visual help. Right now you see the actual URLs. IDK, maybe use badges in the results? Most users won't visually parse the actual URLs! I mean, it takes some experience to do it. At least that's what happened to me during my first months: following search results and ending up on "devel" pages, hehe.

as an example:

Your search for biobase returned 11282 results.

Bioconductor - Biobase - /packages/release/bioc/html/Biobase.html
Bioconductor - Biobase
Biobase - /packages/bioc/1.6/src/contrib/html/Biobase.html
Biobase
Biobase - /packages/bioc/1.7/src/contrib/html/Biobase.html
Biobase
Biobase - /packages/2.3/bioc/html/Biobase.html
Biobase
Biobase - /packages/2.4/bioc/html/Biobase.html
Biobase
Bioconductor - Biobase (development version) - /packages/devel/bioc/html/Biobase.html
Bioconductor - Biobase (development version)
Bioconductor - Biobase - /packages/2.14/bioc/html/Biobase.html
Bioconductor - Biobase
Bioconductor - Biobase - /packages/2.11/bioc/html/Biobase.html
Bioconductor - Biobase
Bioconductor - Biobase - /packages/2.13/bioc/html/Biobase.html
Bioconductor - Biobase
Bioconductor - Biobase - /packages/2.12/bioc/html/Biobase.html
Bioconductor - Biobase
Bioconductor - Biobase - /packages/3.1/bioc/html/Biobase.html
Bioconductor - Biobase

This could be parsed better by the server, instead of leaving that task to the user: for instance, returning a sorted list of unique packages. In this case, only Biobase (for the part shown), with buttons for each release?

Anyway, I'm just opening the issue to start the discussion. My point is, this could use some extra polish for newcomers.
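
The grouping suggested here is mechanical; a sketch that collapses raw result paths into one entry per package with its releases (the regexes are assumptions about the URL layouts shown in the example above, and older layouts like /packages/bioc/1.6/src/contrib/html/ would need extra patterns):

```python
import re

def dedupe_search_results(paths):
    """Group raw search-result paths by package name, collecting the
    release each hit belongs to, so the UI can show one row per package."""
    grouped = {}
    for p in paths:
        pkg = re.search(r"/html/([^/]+)\.html$", p)
        if not pkg:
            continue
        rel = re.search(r"/packages/(release|devel|[\d.]+)/", p)
        grouped.setdefault(pkg.group(1), []).append(rel.group(1) if rel else "?")
    return grouped
```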

Building package landing pages locally

Is it possible to build the package landing pages locally?

I've read the instructions at https://github.com/Bioconductor/bioconductor.org#step-1-rake-get_json but I get stuck on step 1, where there is no get_json.sh script in the repository.

Ideally I'd do this with the Docker image, which works great for testing the rest of the site content, but I'll take any mechanism.

The reason I'm asking is that I'd like to try adding the code browser links to the landing page e.g. https://code.bioconductor.org/browse/BiocStyle to increase the visibility of the service. However I don't think I'll get very far without some local testing of the code.

Of course, if we don't think it's a good idea to have those links then I'll stop trying!

Question badge link should be updated

The badges linking to a tag on support.bioconductor.org should update their scheme since the update of the support site.
From: https://support.bioconductor.org/t/omixer/ to https://support.bioconductor.org/tag/omixer/.
The first one is still valid but returns a mixed set of questions, while the new one is restricted to said tag.

Not sure where the template is, otherwise I would have sent a PR directly.
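
Wherever the template lives, the fix amounts to a one-line URL rewrite; a sketch of the rule (hypothetical helper, based only on the two URL schemes quoted above):

```python
import re

def modern_tag_url(url):
    """Rewrite old support-site tag links (/t/<tag>/) to the /tag/<tag>/
    scheme introduced with the support-site update; other support-site
    URLs pass through unchanged."""
    return re.sub(r"(https://support\.bioconductor\.org)/t/([^/]+)/?$",
                  r"\1/tag/\2/", url)
```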

Watched tags overruns character limit

I need to watch a lot of packages:

edger, chipseq, hic, chiapet, csaw, limma, voom, diffhic, scran, scater, interactionset, cydar, beachmat, singlecellexperiment, dropletutils, biocneighbors, simplesinglecell

... which leads to the error message:

Please shorten this text to 100 characters or less (you are currently using 173 characters).

Probably a 10-fold increase in the number of allowable characters would be sufficient.
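
For concreteness, the limit check is just the length of the comma-separated list; the sketch below reproduces the 173-character example from the report (helper name hypothetical):

```python
def fits_watched_tags(tags, limit=100):
    """Return (fits, length) for a comma-separated watched-tags string
    checked against the support site's character limit."""
    text = ", ".join(tags)
    return len(text) <= limit, len(text)
```

A 1000-character limit, as suggested, would accommodate this list several times over.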
