Compare commits

...

18 Commits

Author SHA1 Message Date
R. Tyler Croy cf37ecad45
Relocate the Pipeline documentation to the appropriate section of the handbook 2016-09-30 12:17:06 -07:00
R. Tyler Croy b651250fad
Stylize the inter-section links for the handbook
This commit also tidies up the table of contents styling too
2016-09-30 12:16:27 -07:00
Liam Newman b55bd5858b
Fix build 2016-09-30 11:27:15 -07:00
R. Tyler Croy f637b7e559
Merge remote-tracking branch 'upstream/master' into chapters 2016-09-30 08:52:04 -07:00
R. Tyler Croy e8ec0ed2b7
Merge remote-tracking branch 'upstream/master' into chapters 2016-09-26 14:41:19 -07:00
R. Tyler Croy bdc14fae7d
WIP: Creating the Security chapter in Operating Jenkins 2016-09-26 14:41:09 -07:00
Liam Newman fa696e1704 Fixes for PR feedback 2016-09-02 13:24:54 -07:00
Liam Newman 4212a2201e Installing Jenkins 2016-09-02 13:24:54 -07:00
R. Tyler Croy 30cb8cce38 Migrate the remainder of the chapter documents to the right template 2016-09-02 13:24:54 -07:00
R. Tyler Croy ce7804dacf Don't render the Sub-sections header if we don't have sections 2016-09-02 13:24:54 -07:00
R. Tyler Croy 4f9a18622f Move the 'Pipeline' doc to a chapter 2016-09-02 13:24:54 -07:00
R. Tyler Croy 0b3306d3b2 Add the outline for the 'Manage Jenkins' chapter 2016-09-02 13:24:54 -07:00
R. Tyler Croy bd7ed449a4 Add the outline for the 'Jenkins for X' chapter 2016-09-02 13:24:54 -07:00
R. Tyler Croy a9e72ee817 Add the outline of the 'Operating Jenkins' chapter 2016-09-02 13:24:54 -07:00
R. Tyler Croy d51d3b2ccf Chapter navigation (#330)
* Add placeholder documents for the various chapters @bitwiseman and I are thinking about

Work in progress document here: <http://etherpad.osuosl.org/cMWlI1hxn6>

* Move the chapters structure around to encourage sane management of chapter subsections

* Restructure the navigation and management of handbook chapters and sections

This is kind of a big commit but really all it does is help generate an
in-memory representation of the handbook documentation tree in order for some of
the newly created templates (chapter, section) to navigate betwixt themselves

* Move the book "home page"
2016-09-02 13:24:54 -07:00
R. Tyler Croy 0923f823c6 Chapter stubs (#329)
* Add placeholder documents for the various chapters @bitwiseman and I are thinking about

Work in progress document here: <http://etherpad.osuosl.org/cMWlI1hxn6>

* Move the chapters structure around to encourage sane management of chapter subsections
2016-09-02 13:24:54 -07:00
R. Tyler Croy e22e310163 Sort chapters on /doc/ in their order 2016-09-02 13:24:54 -07:00
R. Tyler Croy 2b237eaeb5 Upgrade awestruct-ibeams which provides more asciidoc richness in sections_from() 2016-09-02 13:24:54 -07:00
51 changed files with 1652 additions and 804 deletions

View File

@ -11,7 +11,7 @@ source 'https://rubygems.org'
# <https://github.com/jenkins-infra/jenkins.io/blob/master/CONTRIBUTING.adoc#advanced-building>
gem 'awestruct', '~> 0.5.7'
gem 'awestruct-ibeams', '~> 0.2.1'
gem 'awestruct-ibeams', '~> 0.3'
gem 'faraday', '~> 0.9.2'

View File

@ -46,7 +46,7 @@ dependencies {
/* used for legacy markdown template processing via tilt */
asciidoctor 'rubygems:kramdown:[1.9.0,2.0)'
/* contains a number of useful awestruct extensions */
asciidoctor 'rubygems:awestruct-ibeams:[0.2.1,0.3.0)'
asciidoctor 'rubygems:awestruct-ibeams:[0.3.0,)'
/* 1.2.0 causes 'C extensions are not supported' error. It's known that
JRuby's C extensions are deprecated */
asciidoctor 'rubygems:eventmachine:[1.0, 1.2)'

content/_ext/handbook.rb Normal file
View File

@ -0,0 +1,78 @@
require 'asciidoctor'
require 'yaml'
module Awestruct
module IBeams
class Section
attr_accessor :title, :file, :asciidoc, :key
end
class Chapter
attr_accessor :number, :sections, :title,
:file, :asciidoc, :key
def initialize
@sections = []
end
end
class Handbook
attr_accessor :book_dir, :chapters
def initialize(book_dir)
@book_dir = book_dir
@chapters = {}
end
def chapter_for_path(path)
if @chapters_map.nil?
@chapters_map = @chapters.values.map { |c| [c.key, c] }.to_h
end
return @chapters_map[path]
end
end
class HandbookExtension
def initialize(book_dir)
@book_dir = book_dir
end
def execute(site)
unless site.handbook
site.handbook = Handbook.new(@book_dir)
end
# We need a map of source files to their Awestruct::Page to avoid
# re-rendering things too much in the process of generating this page
pagemap = site.pages.map { |p| [p.source_path, p] }.to_h
chapters = Dir[File.join(@book_dir, "/**/chapter.yml")]
chapters.map { |c| [c, YAML.load(File.read(c))] }.each do |file, yaml|
dir = File.dirname(file)
overview = File.join(dir, 'index.adoc')
# Without an overview page, there's no point in attempting to add
# this to the structure
next unless File.exists? overview
chapter = Chapter.new
chapter.number = yaml['chapter']
chapter.title = pagemap[overview].title
chapter.key = File.basename(dir)
if sections = yaml['sections']
sections.each do |s|
section = Section.new
section.key = s
full_path = File.join(dir, "#{s}.adoc")
section.title = pagemap[full_path].title
chapter.sections << section
end
end
site.handbook.chapters[chapter.number] = chapter
end
end
end
end
end

View File

@ -41,6 +41,8 @@ Awestruct::Extensions::Pipeline.new do
extension SolutionPage.new
extension Releases.new
extension Awestruct::IBeams::HandbookExtension.new(File.expand_path(File.dirname(__FILE__) + '/../doc/book'))
transformer VersionSwitcher.new
helper ActiveNav

View File

@ -0,0 +1,28 @@
---
layout: default
---
.container
.row.body
%h1
= page.title
= content
- chapter_key = File.basename(File.dirname(page.source_path))
- chapter = site.handbook.chapter_for_path(chapter_key)
- if chapter.sections && chapter.sections.size > 0
%h2
Sub-sections
%ul
- chapter.sections.each do |section|
%li
%a{:href => section.key}
= section.title
.section_nav.page-link
%a{:href => '..'}
Index

View File

@ -0,0 +1,46 @@
---
layout: default
---
:ruby
section_key = File.basename(page.source_path, File.extname(page.source_path))
chapter_key = File.basename(File.dirname(page.source_path))
chapter = site.handbook.chapter_for_path(chapter_key)
our_index = chapter.sections.find_index { |s| s.key == section_key }
total = chapter.sections.size
# We need to do all this silly arithmetic in order to determine what the
# indices for the previous and/or next sections in this chapter would be
prev_section = nil
if our_index > 0
prev_section = our_index - 1
end
next_section = nil
if our_index < (total - 1)
next_section = our_index + 1
end
.container
.row.body
%h1
= page.title
= content
.section_nav.pagination-links
- unless prev_section.nil?
- section = chapter.sections[prev_section]
%a.prev.page-link{:href => File.join('..', section.key)}
Previous ("#{section.title}")
%a.up.page-link{:href => '../'}
Up ("#{chapter.title}")
- unless next_section.nil?
- section = chapter.sections[next_section]
%a.next.page-link{:href => File.join('..', section.key)}
Next ("#{section.title}")

View File

@ -1328,7 +1328,11 @@ blockquote {
.toc {
margin-bottom: 1rem;
background-color: #FFF;
background-color: #f9f9f9;
border-left: 1px solid #c9c9c9;
float: right;
margin-left: 15px;
padding: 10px;
}
.toc li li {

View File

@ -1,20 +0,0 @@
---
layout: simplepage
title: "Jenkins Handbook"
---
- book_dir = File.expand_path(File.dirname(__FILE__) + '/book')
%div.toc
%ul
- sections_from(book_dir).each do |section, file, url, children|
/
This section provided by the following document:
= File.basename(file)
%li
%a{:href => url}
= section
%ul
- children.each do |section, file, sect_url, sect_children|
%li
%a{:href => "#{url}##{sect_url}"}
= section

View File

@ -0,0 +1,4 @@
---
chapter: 100
sections:
- installing

View File

@ -0,0 +1,16 @@
---
layout: chapter
---
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
:notitle:
= Appendix
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,16 @@
---
layout: section
---
:notitle:
:description:
:author: Liam Newman
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Advanced Jenkins Installation
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,3 @@
---
chapter: 4
sections:

View File

@ -0,0 +1,16 @@
---
layout: chapter
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Best Practices
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,4 @@
---
chapter: 1
sections:
- installing

View File

@ -0,0 +1,17 @@
---
layout: chapter
---
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
:notitle:
= Getting Started with Jenkins
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,177 @@
---
layout: section
---
:notitle:
:description:
:author: Liam Newman
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Installing Jenkins
[NOTE]
====
This is still very much a work in progress
====
[IMPORTANT]
====
This section is part of _Getting Started_.
It provides instructions for *basic* Jenkins configuration on a number of platforms.
It *DOES NOT* cover the full range of considerations or options for installing Jenkins.
See link:/doc/book/appendix/advanced-installation/[Advanced Jenkins Installation]
====
== Overview
== Pre-install
=== System Requirements
[WARNING]
====
These are *starting points*.
For a full discussion of factors see link:/doc/book/hardware-recommendations/[Discussion of hardware recommendations].
====
Minimum Recommended Configuration:
* Java 7
* 256MB free memory
* 1GB+ free disk space
Recommended Configuration for Small Team:
* Java 8
* 1GB+ free memory
* 50GB+ free disk space
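As a quick sanity check, a host can be compared against the minimums above from the command line. This is only a sketch for Linux; the commands and their output formats differ on other platforms:

[source,bash]
----
# Compare the host against the minimum recommended configuration
command -v java >/dev/null && java -version 2>&1 | head -n 1 || echo "java: not installed"
free -m | awk '/^Mem:/ {print "free memory: " $4 " MB"}'    # free memory, in MB
df -Pm . | awk 'NR==2 {print "free disk: " $4 " MB"}'       # free disk under the current path
----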
=== Experimentation, Staging, or Production?
How you configure Jenkins will differ significantly depending on your intended use cases.
This section is specifically targeted to initial use and experimentation.
For other scenarios, see link:/doc/book/appendix/advanced-installation/[Advanced Jenkins Installation].
=== Stand-alone or Servlet?
Jenkins can run stand-alone in its own process using its own web server.
It can also run as one servlet in an existing servlet container, such as Tomcat.
This section is specifically targeted to stand-alone install and execution.
For other scenarios, see link:/doc/book/appendix/advanced-installation/[Advanced Jenkins Installation]
== Installation
[WARNING]
====
These are *clean install* instructions for *non-production* environments.
If you have a *non-production* Jenkins server already running on a system and want to upgrade, see link:/doc/book/getting-started/upgrading/[Upgrading Jenkins].
If you are installing or upgrading a production Jenkins server, see link:/doc/book/appendix/advanced-installation/[Advanced Jenkins Installation].
====
=== Unix/Linux
==== Debian/Ubuntu
On Debian-based distributions, such as Ubuntu, you can install Jenkins through `apt`.
Recent versions are available in link:https://pkg.jenkins.io/debian/[an apt repository]. Older but stable LTS versions are in link:https://pkg.jenkins.io/debian-stable/[this apt repository].
[source,bash]
----
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
----
This package installation will:
* Set up Jenkins as a daemon launched on start. See `/etc/init.d/jenkins` for more details.
* Create a '`jenkins`' user to run this service.
* Direct console log output to the file `/var/log/jenkins/jenkins.log`. Check this file if you are troubleshooting Jenkins.
* Populate `/etc/default/jenkins` with configuration parameters for the launch, e.g. `JENKINS_HOME`.
* Set Jenkins to listen on port 8080. Access this port with your browser to start configuration.
[NOTE]
====
If your `/etc/init.d/jenkins` file fails to start Jenkins, edit `/etc/default/jenkins` to replace the line
`HTTP_PORT=8080` with `HTTP_PORT=8081`.
Here, "8081" was chosen, but you can use any other available port.
====
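The port change in the note above can also be applied non-interactively with `sed`. The sketch below operates on a scratch copy rather than the real `/etc/default/jenkins`; on an actual host you would point `sed` at that file (with `sudo`, and GNU `sed` assumed for `-i`) and restart Jenkins afterwards:

[source,bash]
----
# Demonstrate the port change on a temporary stand-in for /etc/default/jenkins
conf=$(mktemp)
echo 'HTTP_PORT=8080' > "$conf"
sed -i 's/^HTTP_PORT=8080$/HTTP_PORT=8081/' "$conf"
grep '^HTTP_PORT' "$conf"   # prints: HTTP_PORT=8081
----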
=== OS X
To install from the website, using a package:
* link:http://mirrors.jenkins.io/osx/latest[Download the latest package]
* Open the package and follow the instructions
Jenkins can also be installed using `brew`:
[source,bash]
----
brew cask install jenkins
----
=== Windows
To install from the website, using the installer:
* link:http://mirrors.jenkins.io/windows/latest[Download the latest package]
* Open the package and follow the instructions
=== Docker
You must have link:http://docker.io[Docker] properly installed on your machine.
See the link:https://www.docker.io/gettingstarted/#h_installation[Docker installation guide] for details.
First, pull the official link:https://hub.docker.com/_/jenkins/[jenkins] image from the Docker repository.
[source,bash]
----
docker pull jenkins
----
Next, run a container using this image and map the data directory from the container to the host; e.g. in the example below, `/var/jenkins_home` from the container is mapped to the `jenkins/` directory under the current path on the host. The container's port `8080` is also exposed to the host as port `49001`.
[source,bash]
----
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
----
=== Other
See link:/doc/book/appendix/advanced-installation/[Advanced Jenkins Installation]
== Post-install (Setup Wizard)
=== Create Admin User and Password for Jenkins
Jenkins is initially configured to be secure on first launch.
It cannot be accessed without a username and password, and open ports are
limited. During the initial run of Jenkins, a security token is generated
and printed in the console log:
----
*************************************************************
Jenkins initial setup is required. A security token is required to proceed.
Please use the following security token to proceed to installation:
41d2b60b0e4cb5bf2025d33b21cb
*************************************************************
----
The install instructions for each of the platforms above include the default location where you can find this log output.
This token must be entered in the "Setup Wizard" the first time you open the Jenkins UI.
This token will also serve as the default password for the user 'admin' if you skip the user-creation step in the Setup Wizard.
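On Debian-based installs the console log ends up in `/var/log/jenkins/jenkins.log` (see the package notes above), so the token can be extracted with `grep`. The sketch below runs against a saved copy of the banner; the log path, and the exact banner wording, may differ per platform and Jenkins version:

[source,bash]
----
# On a real Debian install you would use: log=/var/log/jenkins/jenkins.log
log=$(mktemp)
cat > "$log" <<'EOF'
*************************************************************
Jenkins initial setup is required. A security token is required to proceed.
Please use the following security token to proceed to installation:
41d2b60b0e4cb5bf2025d33b21cb
*************************************************************
EOF
# The token is the lone hexadecimal line inside the banner
grep -E '^[0-9a-f]{20,}$' "$log"   # prints: 41d2b60b0e4cb5bf2025d33b21cb
----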
=== Initial Plugin Installation
The Setup Wizard will also install the initial plugins for this Jenkins server.
The recommended set of plugins is based on the most common use cases.
You are free to add more during the Setup Wizard or install them later as needed.

View File

@ -0,0 +1,23 @@
---
layout: simplepage
title: "Jenkins Handbook"
---
%ol
- site.handbook.chapters.keys.sort.each do |c|
- chapter = site.handbook.chapters[c]
%li
%a{:href => chapter.key}
%h4
= chapter.title
%ol
- chapter.sections.each do |section|
%li
%a{:href => File.join(chapter.key, section.key)}
%h6
= section.title
.page-link
%a{:href => '/doc'}
Up to all documentation

View File

@ -0,0 +1,7 @@
---
chapter: 6
sections:
- dotnet
- java
- python
- ruby

View File

@ -0,0 +1,17 @@
---
layout: section
---
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
:notitle:
= Jenkins with .NET
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,16 @@
---
layout: chapter
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Jenkins for X
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,17 @@
---
layout: section
---
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
:notitle:
= Jenkins with Java
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,21 @@
---
layout: section
---
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
:notitle:
= Jenkins with Python
[NOTE]
====
This is still very much a work in progress
====
== Test Reports

View File

@ -0,0 +1,24 @@
---
layout: section
---
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
:notitle:
= Jenkins with Ruby
[NOTE]
====
This is still very much a work in progress
====
== Test Reports
== Coverage Reports
== Rails

View File

@ -0,0 +1,8 @@
---
chapter: 3
sections:
- plugins
- system-configuration
- security
- nodes
- tools

View File

@ -0,0 +1,16 @@
---
layout: chapter
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Manage Jenkins
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,17 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Managing Nodes
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,16 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Managing Plugins
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,17 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Managing Security
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,17 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= System Configuration
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,18 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Managing Tools
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,16 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Backing-up/Restoring Jenkins
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,8 @@
---
chapter: 7
sections:
- backing-up
- monitoring
- security
- with-chef
- with-puppet

View File

@ -0,0 +1,16 @@
---
layout: chapter
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Operating Jenkins
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,16 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Monitoring Jenkins
[NOTE]
====
This is still very much a work in progress
====

View File

@ -1,44 +1,62 @@
---
layout: simplepage
layout: section
---
:doctitle: Securing Jenkins
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:imagesdir: /doc/book/resources/securing-jenkins
:sectanchors:
:toc: left
== Securing Jenkins
In the default configuration of Jenkins 1.x, Jenkins does not perform any security checks. This means the ability of Jenkins to
launch processes and access local files are available to anyone who can access Jenkins web UI and some more.
= Securing Jenkins
[NOTE]
====
This is still very much a work in progress
====
In the default configuration of Jenkins 1.x, Jenkins does not perform any
security checks. This means that the ability of Jenkins to launch processes
and access local files is available to anyone who can access the Jenkins web
UI, and more.
Securing Jenkins has two aspects to it.
* Access control, which ensures users are authenticated when accessing Jenkins and their activities are authorized.
* Access control, which ensures users are authenticated when accessing Jenkins
and their activities are authorized.
* Protecting Jenkins against external threats
== Access Control
You should lock down the access to Jenkins UI so that users are authenticated and appropriate set of permissions are given to them. This setting is controlled mainly by two axes:
* *Security Realm*, which determines users and their passwords, as well as what groups the users belong to.
You should lock down access to the Jenkins UI so that users are authenticated
and an appropriate set of permissions is given to them. This setting is
controlled mainly by two axes:
* *Security Realm*, which determines users and their passwords, as well as what
groups the users belong to.
* *Authorization Strategy*, which determines who has access to what.
These two axes are orthogonal, and need to be individually configured. For example, you might choose to use external LDAP or Active Directory as the security realm, and you might choose "everyone full access once logged in" mode for authorization strategy. Or you might choose to let Jenkins run its own user database, and perform access control based on the permission/user matrix.
These two axes are orthogonal, and need to be individually configured. For
example, you might choose to use external LDAP or Active Directory as the
security realm, and you might choose "everyone full access once logged in" mode
for authorization strategy. Or you might choose to let Jenkins run its own user
database, and perform access control based on the permission/user matrix.
The following pages discuss various aspects of this feature in detail:
* https://wiki.jenkins-ci.org/display/JENKINS/Quick+and+Simple+Security[Quick and Simple Security] --- if you are running Jenkins like `java -jar jenkins.war` and only need a very simple set up
* https://wiki.jenkins-ci.org/display/JENKINS/Standard+Security+Setup[Standard Security Setup] --- discusses the most common set up of letting Jenkins run its own user database and do finer-grained access control
* https://wiki.jenkins-ci.org/display/JENKINS/Apache+frontend+for+security[Apache frontend for security] --- run Jenkins behind Apache and perform access control in Apache instead of Jenkins
* https://wiki.jenkins-ci.org/display/JENKINS/Authenticating+scripted+clients[Authenticating scripted clients] --- if you need to programmatically access a security-enabled Jenkins web UI, use BASIC auth
* https://wiki.jenkins-ci.org/display/JENKINS/Disable+security[Help! I locked myself out!] --- if something goes really wrong and you can't get full access anymore
* https://wiki.jenkins-ci.org/display/JENKINS/Matrix-based+security[Matrix-based security] --- granting and denying finer-grained permissions
== Protect users of Jenkins from other threats
There are additional security subsystems in Jenkins that protect Jenkins and users of Jenkins from indirect attacks.
The following topics discuss features that are *off by default*. We recommend you read them first and act on them.
There are additional security subsystems in Jenkins that protect Jenkins and
users of Jenkins from indirect attacks.
The following topics discuss features that are *off by default*. We recommend
you read them first and act on them.
* https://wiki.jenkins-ci.org/display/JENKINS/CSRF+Protection[CSRF Protection] --- prevent a remote attack against Jenkins running inside your firewall
* https://wiki.jenkins-ci.org/display/JENKINS/Security+implication+of+building+on+master[Security implication of building on master] --- protect Jenkins master from malicious builds
@ -48,3 +66,23 @@ The following topics discuss other security features that are on by default. You
* https://wiki.jenkins-ci.org/display/JENKINS/Configuring+Content+Security+Policy[Configuring Content Security Policy] --- protect users of Jenkins from malicious builds
* https://wiki.jenkins-ci.org/display/JENKINS/Markup+formatting[Markup formatting] --- protect users of Jenkins from malicious users of Jenkins
== Disabling Security
One may accidentally set up the security realm and authorization in such a way
that you are no longer able to reconfigure Jenkins.
When this happens, you can fix this by the following steps:
. Stop Jenkins (the easiest way to do this is to stop the servlet container).
. Go to `$JENKINS_HOME` in the file system and find the `config.xml` file.
. Open this file in the editor.
. Look for the `<useSecurity>true</useSecurity>` element in this file.
. Replace `true` with `false`
. Remove the elements `authorizationStrategy` and `securityRealm`
. Start Jenkins
. When Jenkins comes back, it will be in an unsecured mode where everyone gets full
access to the system.
If this is still not working, try renaming or deleting `config.xml`.
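Steps 4 through 6 above amount to a small text edit of `config.xml`, and the `useSecurity` flip can be scripted. The sketch below uses a scratch file standing in for `$JENKINS_HOME/config.xml` (on a real system, stop Jenkins first and edit the actual file as a user with write access):

[source,bash]
----
# Scratch stand-in for $JENKINS_HOME/config.xml
config=$(mktemp)
cat > "$config" <<'EOF'
<hudson>
  <useSecurity>true</useSecurity>
</hudson>
EOF
# Flip the useSecurity flag (GNU sed; on BSD/macOS use: sed -i '')
sed -i 's#<useSecurity>true</useSecurity>#<useSecurity>false</useSecurity>#' "$config"
grep -o '<useSecurity>[^<]*</useSecurity>' "$config"   # prints: <useSecurity>false</useSecurity>
----

The multi-line `authorizationStrategy` and `securityRealm` elements are easier to remove in an editor than with `sed`.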

View File

@ -0,0 +1,16 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Managing Jenkins with Chef
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,16 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Managing Jenkins with Puppet
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,5 @@
---
chapter: 5
sections:
- overview
- jenkinsfile

View File

@ -0,0 +1,16 @@
---
layout: chapter
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Pipeline
[NOTE]
====
This is still very much a work in progress
====

View File

@ -0,0 +1,148 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= The Jenkinsfile
== Audience and Purpose
This document is intended for Jenkins users who want to leverage the power of
pipeline functionality. Extending the reach of what was learned from a "Hello
World" example in link:/doc/pipeline/[Getting Started with Pipeline], this
document explains how to use a `Jenkinsfile` to perform a simple checkout and
build for the contents of a repository.
== Creating a Jenkinsfile
A `Jenkinsfile` is a container for your pipeline (or other) script, which details
what specific steps are needed to perform a job for which you want to use
Jenkins. You create a `Jenkinsfile` with your preferred Groovy editor, or through
the configuration page on the web interface of your Jenkins instance.
Using a Groovy editor to code a `Jenkinsfile` gives you more flexibility for
building complex single or multibranch pipelines, but whether you use an editor
or the Jenkins interface does not matter if what you want to do is get familiar
with basic `Jenkinsfile` content.
. Open your Jenkins instance or Groovy editor.
. Navigate to the directory you want (it should be the root directory for your project).
. Use standard Jenkins syntax.
. Save your file.
The following example shows a basic `Jenkinsfile` made to build and test code for
a Maven project. `node` is the step that schedules tasks in the following block
to run on the machine (usually an agent) that matches the label specified in the
step argument (in this case, a node called "linux"). Code between the braces (
`{` and `}` ) is the body of the `node` step. The `checkout scm` command
indicates that this `Jenkinsfile` was created with an eye toward multibranch
support:
[source,groovy]
----
node ('linux'){
stage 'Build and Test'
env.PATH = "${tool 'Maven 3'}/bin:${env.PATH}"
checkout scm
sh 'mvn clean package'
}
----
In single-branch contexts, you could replace `checkout scm` with a source code
checkout step that calls a particular repository, such as:
[source,groovy]
----
git url: "https://github.com/my-organization/simple-maven-project-with-tests.git"
----
== Making Pull Requests
A pull request notifies the person responsible for maintaining a Jenkins
repository that you have a change or change set that you want to see merged into
the main branch associated with that repository. Each individual change is
called a "commit."
You make pull requests from a command line, or by selecting the appropriately
labeled button (typically "Pull" or "Create Pull Request") in the interface for
your source code management system.
A pull request to a repository included in or monitored by an Organization
Folder can be used to automatically execute a multibranch pipeline build.
== Using Organization Folders
Organization folders enable Jenkins to automatically detect and include any new
repositories within them as resources.
When you create a new repository (as might be the case for a new project), that
repository has a `Jenkinsfile`. If you also configure one or more organization
folders, Jenkins automatically detects any repository in an organization folder,
scans the contents of that repository at either default or configurable
intervals, and creates a Multibranch Pipeline project for what it finds in the
scan. An organization folder functions as a "parent," and any item within it is
treated as a "child" of that parent.
Organization folders alleviate the need to manually create projects for new
repositories. When you use organization folders, Jenkins views your repositories
as a hierarchy, and each repository (organization folder) may optionally have
child elements such as branches or pull requests.
To create Organization folders:
. Open Jenkins in your web browser.
. Go to: New Item → GitHub Organization or New Item → Bitbucket Team.
. Follow the configuration steps, making sure to specify appropriate scan
credentials and a specific owner for the GitHub Organization or Bitbucket Team
name.
. Set build triggers by selecting the checkbox associated with the trigger type
you want. Folder scans and the pipeline builds associated with those scans can
be initiated by command scripts or performed at defined intervals. They can also
be triggered by project promotion or changes to the images in a monitored Docker
hub.
. Decide whether to automatically remove or retain unused items. "Orphaned Item
Strategy" fields in the configuration interface let you specify how many days to
keep old items, and how many old items to keep. If you enter no values in these
fields, unused items are removed by default.
While configuring organization folders, you can set the following options:
* Repository name pattern - a regular expression to specify which repositories are included in scans
* API endpoint - an alternate API endpoint to use with a self-hosted GitHub Enterprise installation
* Checkout credentials - alternate credentials to use when checking out (cloning) code
Multibranch Pipeline projects and Organization Folders are examples of
"computed folder" functionality. In Multibranch Pipeline projects, computation
creates child items for eligible branches. In Organization folders, computation
populates child items as individual Multibranch Pipelines for scanned
repositories.
Select the "Folder Computation" section of your Jenkins interface to see the
duration (in seconds) and result (success or failure) of computation operations,
or to access a Folder Computation Log that provides more detail about this
activity.
== Basic Checkout and Build
Checkout and build command examples are shown in the code example used by the
introduction above. Examples shown assume that Jenkins is running on Linux or
another Unix-like operating system.
If your Jenkins server or agent is running on Windows, you are less likely to be
using the Bourne shell (`sh`) or
link:http://www.computerhope.com/unix/ubash.htm[Bourne-Again shell] (`bash`) as
a command language interpreter for starting software builds. In Windows
environments, use `bat` in place of `sh`, and backslashes (`\`) rather than
slashes as file separators in pathnames.

View File

@ -0,0 +1,620 @@
---
layout: section
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Overview
== Audience and Purpose
This document is intended for novice users of the Jenkins pipeline feature. The
document explains what a pipeline is, why that matters, and how to create the
different kinds of pipelines.
== Why Pipeline?
While standard Jenkins "freestyle" jobs support simple continuous integration by
allowing you to define sequential tasks in an application lifecycle, they do not
create a record of execution that persists through any planned or unplanned
restarts, enable one script to address all the steps in a complex workflow, or
confer the other advantages of pipelines.
In contrast to freestyle jobs, pipelines enable you to define the whole
application lifecycle. Pipeline functionality helps Jenkins to support
continuous delivery (CD). The Pipeline plugin was built with requirements for a
flexible, extensible, and script-based CD workflow capability in mind.
Accordingly, pipeline functionality is:
* Durable: Pipelines can survive both planned and unplanned restarts of your Jenkins master.
* Pausable: Pipelines can optionally stop and wait for human input or approval before completing the jobs for which they were built.
* Versatile: Pipelines support complex real-world CD requirements, including the ability to fork or join, loop, and work in parallel with each other.
* Extensible: The Pipeline plugin supports custom extensions to its DSL (domain-specific language) and multiple options for integration with other plugins.
The flowchart below is an example of one continuous delivery scenario enabled by the Pipeline plugin:
image::/images/pipeline/realworld-pipeline-flow.png[title="Pipeline Flow", 800]
== Pipeline Defined
Pipelines are Jenkins jobs enabled by the Pipeline (formerly called "workflow")
plugin and built with simple text scripts that use a Pipeline DSL
(domain-specific language) based on the Groovy programming language.
Pipelines leverage the power of multiple steps to execute both simple and
complex tasks according to parameters that you establish. Once created,
pipelines can build code and orchestrate the work required to drive applications
from commit to delivery.
== Pipeline Vocabulary
Pipeline terms such as "step," "node," and "stage" are a subset of the vocabulary used for Jenkins in general.
Step::
A "step" (often called a "build step") is a single task that is part of a sequence. Steps tell Jenkins what to do.
Node::
In pipeline coding contexts, a "node" is a step that does two things, typically by enlisting help from available executors on agents:
* Schedules the steps contained within it to run by adding them to the Jenkins build queue (so that as soon as an executor slot is free on a node, the appropriate steps run).
* Creates a workspace, meaning a file directory specific to a particular job, where resource-intensive processing can occur without negatively impacting your pipeline performance. Workspaces created by node are automatically removed after all the steps contained inside the node declaration finish executing.
It is a best practice to do all material work, such as building or running shell scripts, within nodes, because node blocks in a stage tell Jenkins that the steps within them are resource-intensive enough to be scheduled, request help from the agent pool, and lock a workspace only as long as they need it.
In Jenkins generally, "node" also means any computer that is part of your Jenkins installation, whether that computer is used as a master or as an agent.
Stage::
A "stage" is a logically distinct part of the execution of any task, with parameters for locking, ordering, and labeling its part of a process relative to other parts of the same process. Pipeline syntax is often composed of stages. Each stage step can have one or more build steps within it.
It is a best practice to work within stages because they help with organization by lending logical divisions to a pipeline, and because the
Jenkins Pipeline visualization feature displays stages as unique segments of the pipeline.
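Putting this vocabulary together, a minimal sketch of a pipeline using a node and stages might look like the following (the stage names and commands are illustrative):

[source,groovy]
----
node {
    stage 'Build'       // labels a logically distinct part of the process
    sh 'make'           // a step doing material work inside the node

    stage 'Test'
    sh 'make check'
}
----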
Familiarity with Jenkins terms such as "master," "agent," and "executor" also helps with understanding how pipelines work. These terms are not specific to pipelines:
* master - A "master" is the computer where the Jenkins server is installed and
running; it handles tasks for your build system. Pipeline scripts are parsed
on masters, where Groovy code runs and node blocks allocate executors and
workspaces for use by any nested steps (such as `sh`) that might request one or both.
* agent - An "agent" (formerly "slave") is a computer set up to offload
available projects from the master. Your configuration determines the number
and scope of operations that an agent can perform. Operations are performed by
executors.
* executor - An "executor" is a computational resource for running builds or
Pipeline steps. It can run on master or agent machines, either by itself or in
parallel with other executors.
== Preparing Jenkins to Run Pipelines
To run pipelines, you need to have a Jenkins instance that is set up with the
appropriate plugins. This requires:
* Jenkins 1.642.3 or later (Jenkins 2 is recommended)
* The Pipeline plugin
=== Installing the Pipeline Plugin
The Pipeline plugin is installed in the same way as other Jenkins plugins.
Installing the Pipeline plugin also installs the suite of related plugins on
which it depends:
. Open Jenkins in your web browser.
. On the Manage Jenkins page for your installation, navigate to *Manage Plugins*.
. Find https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin[Pipeline] from among the plugins listed on the Available tab (You can do this by scrolling through the plugin list or by using "Pipeline" as a term to filter results).
. Select the checkbox for the Pipeline plugin.
. Select either *Install without restart* or *Download now and install after restart*.
. Restart Jenkins.
=== Pipeline Plugin Context
The Pipeline plugin works with a suite of related plugins that enhance the
pipeline functionality of your Jenkins setup. The related plugins typically
introduce additional pipeline syntax or visualizations.
For example, the table below, while not comprehensive, describes a few
pipeline-related plugins in terms of their importance to pipeline functionality
(required, recommended, or optional).
To get the basic pipeline functionality, you only need to install the main
Pipeline plugin, but recommended plugins add additional capabilities that you
will probably want. For example, it is a best practice to develop pipelines as code by storing a `Jenkinsfile` with pipeline script in your SCM,
so that you can apply the same version control and testing to pipelines as you do to your other software, and that is why the
Multibranch Pipeline plugin is recommended.
Optional plugins are mainly useful if you are creating pipelines that are
related to the technologies that they support.
[options="header"]
|=======================
|Plugin Name |Description |Status
|Pipeline (workflow-aggregator) | Installs the core pipeline engine and its dependent plugins:
Pipeline: API,
Pipeline: Basic Steps,
Pipeline: Durable Task Step,
Pipeline: Execution Support,
Pipeline: Global Shared Library for CPS pipeline,
Pipeline: Groovy CPS Execution,
Pipeline: Job,
Pipeline: SCM Step,
Pipeline: Step API
| required
| Pipeline: Stage View
| Provides a graphical swimlane view of pipeline stage execution, as well as a build history of the stages
| recommended
| Multibranch Pipeline
| Adds "Multibranch Pipeline" item type which enables Jenkins to automatically
build branches that contain a `Jenkinsfile`
| recommended
| GitHub Branch Source
| Adds GitHub Organization Folder item type and adds "GitHub" as a branch source on Multibranch pipelines
| recommended for teams hosting repositories in GitHub
| Bitbucket Branch Source
| Adds Bitbucket Team item type and adds "Bitbucket" as a branch source on Multibranch pipelines
| recommended for teams hosting repositories in Bitbucket; best with Bitbucket Server 4.0 or later.
| Docker Pipeline
| Enables pipeline to build and use Docker containers inside pipeline scripts.
| optional
|=======================
=== More Information
As with any Jenkins plugin, you can install the Pipeline plugin using the Plugin
Manager in a running Jenkins instance.
To explore Pipeline without installing
Jenkins separately or accessing your production system, you can run a
link:https://github.com/jenkinsci/workflow-aggregator-plugin/blob/master/demo/README.md[Docker
demo] of Pipeline functionality.
Pipeline-related plugins are regularly "whitelisted" as compatible with or
designed for Pipeline usage. For more information, see the
link:https://github.com/jenkinsci/pipeline-plugin/blob/master/COMPATIBILITY.md[Plugin
Compatibility With Pipeline] web page.
When you get flows from source control through `Jenkinsfile` or a link:https://github.com/jenkinsci/workflow-cps-global-lib-plugin/blob/master/README.md[Pipeline Global Library],
you may also have to whitelist method calls in the link:https://wiki.jenkins-ci.org/display/JENKINS/Script+Security+Plugin[Script Security Plugin].
[NOTE]
====
Several plugins available in the Jenkins ecosystem but not actually
related to the Pipeline feature set described in this guide also use the terms
"pipeline," "DSL," and "Job DSL" in their names. For example:
* Build Pipeline plugin - provides a way to execute Jenkins jobs sequentially
* Build Flow Plugin - introduces a job type that lets you define an orchestration process as a script.
This guide describes the link:https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin[Pipeline Plugin] that supports the current Pipeline feature set.
====
== Approaches to Defining Pipeline Script
You can create pipelines in either of the following ways:
* Through script entered in the configuration page of the web interface for your Jenkins instance.
* Through a `Jenkinsfile` that you create with a text editor and then check into your project's source control repository, where it can be accessed when you select the *Pipeline Script from SCM* option while configuring the Pipeline in Jenkins.
[NOTE]
====
When you use a `Jenkinsfile`, it is a best practice to put `#!groovy` at the top of the file so that IDEs and
GitHub diffs detect the Groovy language properly.
====
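For example, a trivial `Jenkinsfile` following this practice might look like:

[source,groovy]
----
#!groovy

node {
    echo 'Hello from a Jenkinsfile'
}
----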
== Creating a Simple Pipeline
Initial pipeline usage typically involves the following tasks:
. Downloading and installing the Pipeline plugin (unless it is already part of your Jenkins installation)
. Creating a Pipeline of a specific type
. Configuring your Pipeline
. Controlling flow (workflow) through your Pipeline
. Scaling your Pipeline
To create a simple pipeline from the Jenkins interface, perform the following steps:
. Click *New Item* on your Jenkins home page, enter a name for your (pipeline) job, select *Pipeline*, and click *OK*.
. In the Script text area of the configuration screen, enter your pipeline script. If you are new to pipeline creation, you might want to start by opening Snippet Generator and selecting the "Hello World" snippet.
. Check the *Use Groovy Sandbox* option below the Script text area.
. Click *Save*.
. Click *Build Now* to create the pipeline.
. Click ▾ and select *Console Output* to see the output.
Pipelines are written as Groovy scripts that tell Jenkins what to do when they
are run. Relevant bits of syntax are introduced as needed, so while an
understanding of Groovy is helpful, it is not required to use Pipeline.
If you are a Jenkins administrator (in other words, authorized to approve your
own scripts), sandboxing is optional but efficient, because it lets scripts run
without approval as long as they limit themselves to operations that Jenkins
considers inherently safe.
[NOTE]
====
To use pathnames that include spaces, wrap those pathnames in escaped double quotes (`\"`).
The extra quotation marks ensure that any spaces in pathnames are parsed properly.
====
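A minimal sketch of this (the directory name containing a space is hypothetical):

[source,groovy]
----
node {
    // The quoted pathname "/tmp/my project" is hypothetical
    sh "ls \"/tmp/my project\""
}
----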
The following example shows a successful build of a pipeline created with a
one-line script that uses the `echo` step to output the phrase, "Hello from
Pipeline":
[source,groovy]
----
node {
    echo 'Hello from Pipeline'
}
----
----
Started by user anonymous
[Pipeline] echo
Hello from Pipeline
[Pipeline] End of Pipeline
Finished: SUCCESS
----
[NOTE]
====
You can also create complex and multibranch pipelines in the script entry
area of the Jenkins configuration page, but because they contain multiple stages
and the configuration page UI provides limited scripting space, pipeline
creation is more commonly done using an editor of your choice from which scripts
can be loaded into Jenkins using the *Pipeline script from SCM* option.
====
It is a best practice to use parallel steps whenever you can, as long as you remember not to attempt so much parallel processing
that it swamps the number of available executors. For example, you can acquire a node within the parallel branches of your pipeline:
[source,groovy]
----
parallel 'integration-tests': {
    node('mvn-3.3') {}
}, 'functional-tests': {
    node('selenium') {}
}
----
== Creating Multibranch Pipelines
The *Multibranch Pipeline* project type enables you to configure different jobs
for different branches of the same project. In a multibranch pipeline
configuration, Jenkins automatically discovers, manages, and executes jobs
for multiple source repositories and branches. This eliminates the need for
manual job creation and management, as would otherwise be necessary
when, for example, a developer adds a new feature to an existing
product.
A multibranch pipeline project always includes a `Jenkinsfile` in its
repository root. Jenkins automatically creates a sub-project for each branch
that it finds in a repository with a `Jenkinsfile`.
Multibranch pipelines use the same version control as the rest of your software
development process. This "pipeline as code" approach has the following
advantages:
* You can modify pipeline code without special editing permissions.
* Finding out who changed what and why no longer depends on whether developers remember to comment their code changes in configuration files.
* Version control makes the history of changes to code readily apparent.
To create a Multibranch Pipeline:
. Click New Item on your Jenkins home page, enter a name for your job, select Multibranch Pipeline, and click OK.
. Configure your SCM source (options include Git, GitHub, Mercurial, Subversion, and Bitbucket), supplying information about the owner, scan credentials, and repository in appropriate fields.
For example, if you select Git as the branch source, you are prompted for the usual connection information, but then rather than enter a fixed refspec (Git's name for a source/destination pair), you would enter a branch name pattern (Use default settings to look for any branch).
. Configure the other multibranch pipeline options:
* API endpoint - an alternate API endpoint to use with a self-hosted GitHub Enterprise installation
* Checkout credentials - alternate credentials to use when checking out (cloning) the code
* Include branches - a regular expression to specify branches to include
* Exclude branches - a regular expression to specify branches to exclude; note that exclusions take precedence over inclusions
. Save your configuration.
Jenkins automatically scans the designated repository and creates appropriate branches.
For example (again in Git), if you started with a master branch, and then wanted
to experiment with some changes, and so did `git checkout -b newfeature` and
pushed some commits, Jenkins would automatically detect the new branch in your
repository and create a new sub-project for it. That sub-project would have its
own build history unrelated to the trunk (main line).
If you choose, you can ask for the sub-project to be automatically removed after
its branch is merged with the main line and deleted. To change your Pipeline
script—for example, to add a new Jenkins publisher step corresponding to new
reports that your `Makefile`/`pom.xml`/etc. is creating—you edit the appropriate
`Jenkinsfile`. Your Pipeline script is always synchronized with
the rest of the source code you are working on.
*Multibranch Pipeline* projects expose the name of the branch being built with
the `BRANCH_NAME` environment variable. In multibranch pipelines, the `checkout
scm` step checks out the specific commit from which the `Jenkinsfile` originated, so
as to maintain branch integrity.
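For example, a `Jenkinsfile` can vary its behavior by branch (the branch logic shown here is illustrative):

[source,groovy]
----
node {
    checkout scm                        // checks out the commit the Jenkinsfile came from
    if (env.BRANCH_NAME == 'master') {
        echo 'Building the main line'
    } else {
        echo "Building branch ${env.BRANCH_NAME}"
    }
}
----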
== Loading Pipeline Scripts from SCM
Complex pipelines would be cumbersome to write and maintain if you could only do
that in the text area provided by the Jenkins job configuration page.
Accordingly, you also have the option of writing pipeline scripts (Jenkinsfiles)
with the editor that you use in your IDE (integrated development environment) or
SCM system, and then loading those scripts into Jenkins using the *Pipeline
Script from SCM* option enabled by the workflow-scm-step plugin, which is one of
the plugins that the Pipeline plugin depends on and automatically installs.
Loading pipeline scripts using the `checkout scm` step leverages the
idea of "pipeline as code," and lets you maintain pipelines using version
control and standalone Groovy editors.
To do this, select *Pipeline script from SCM* when defining the pipeline.
With the *Pipeline script from SCM* option selected, you do not enter any Groovy
code in the Jenkins UI; instead, you specify the path within the repository from
which to retrieve the pipeline script. When you update the designated
repository, a new build is triggered, as long as your job is configured with an
SCM polling trigger.
== Writing Pipeline Scripts in the Jenkins UI
Because Pipelines are composed of text scripts, they can be written (edited) in
the same script creation area of the Jenkins user interface where you create
them:
image::/images/pipeline/pipeline-editor.png[title="Pipeline Editor", 800]
NOTE: You determine which kind of pipeline you want to set up before writing it.
=== Using Snippet Generator
You can use the Snippet Generator tool to create syntax examples for individual
steps with which you might not be familiar, or to add relevant syntax to a step
with a long and complex configuration.
Snippet Generator is dynamically populated with a list of the steps available
for pipeline configuration. Depending on the plugins installed to your Jenkins
environment, you may see more or fewer items in the list exposed by Snippet
Generator.
To add one or more steps from Snippet Generator to your pipeline code:
. Open Snippet Generator
. Scroll to the step you want
. Click that step
. Configure the selected step, if presented with configuration options
. Click *Generate Groovy* to see a Groovy snippet that runs the step as configured
. Optionally select and configure additional steps
image::/images/pipeline/snippet-generator.png[title="Snippet Generator", 800]
When you click *Generate Groovy* after selecting a step, you see the function
name used for that step, the names of any parameters it takes (if they are not
default parameters), and the syntax used by Snippet Generator to create that
step.
You can copy and paste the generated code right into your Pipeline, or use it as
a starting point, perhaps deleting any optional parameters that you do not need.
To access information about steps marked with the help icon (question mark),
click on that icon.
== Basic Groovy Syntax for Pipeline Configuration
You typically add functionality to a new pipeline by performing the following tasks:
* Adding nodes
* Adding more complex logic (usually expressed as stages and steps)
To configure a pipeline you have created through the Jenkins UI, select the
pipeline and click *Configure*.
If you run Jenkins on Linux or another Unix-like operating system with a Git
repository that you want to test, for example, you can do that with syntax like
the following, substituting your own name for "joe-user":
[source, groovy]
----
node {
    git url: 'https://github.com/joe_user/simple-maven-project-with-tests.git'
    def mvnHome = tool 'M3'
    sh "${mvnHome}/bin/mvn -B verify"
}
----
In Windows environments, use `bat` in place of `sh` and use backslashes as the
file separator where needed (backslashes need to be escaped inside strings).
For example, rather than:
[source, groovy]
----
sh "${mvnHome}/bin/mvn -B verify"
----
you would use:
[source, groovy]
----
bat "${mvnHome}\\bin\\mvn -B verify"
----
Your Groovy pipeline script can include functions, conditional tests, loops,
try/catch/finally blocks, and so on.
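For instance, a sketch of a try/catch/finally block inside a node (the build command is hypothetical):

[source,groovy]
----
node {
    try {
        sh 'make'                       // hypothetical build command
    } catch (err) {
        echo "Build failed: ${err}"
        currentBuild.result = 'FAILURE' // record the failure rather than aborting silently
    } finally {
        echo 'Cleaning up'              // runs whether or not the build failed
    }
}
----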
Sample syntax for one node in a Java environment that is using the open source
Maven build automation tool (note the definition for `mvnHome`) is shown below:
image::/images/pipeline/pipeline-sample.png[title="Pipeline Sample", 800]
Pipeline Sample (graphic) key:
* `def` is a keyword to define a function (you can also give a Java type in
place of `def` to make it look more like a Java method)
* `=~` is Groovy syntax to match text against a regular expression
* `[0]` looks up the first match
* `[1]` looks up the first (…) group within that match
* `readFile` step loads a text file from the workspace and returns its content
(Note: do not use `java.io.File` methods; these refer to files on the master
where Jenkins is running, not files in the current workspace.)
* The `writeFile` step saves content to a text file in the workspace
* The `fileExists` step checks whether a file exists without loading it.
The `tool` step makes sure a tool with the given name is installed on the current
node. The script needs to know where the tool was installed so that it can be run
later. For this, you need a variable.
The `def` keyword in Groovy is the quickest way to define a new variable (with no specific type).
In the sample syntax discussed above, a variable is defined by the following expression:
[source, groovy]
----
def mvnHome = tool 'M3'
----
This ensures that 'M3' is installed somewhere accessible to Jenkins and assigns
the return value of the step (an installation path) to the `mvnHome` variable.
== Advanced Groovy Syntax for Pipeline Configuration
Groovy lets you omit parentheses around function arguments. The named-parameter
syntax is also a shorthand for creating a map, which in Groovy uses the syntax
`[key1: value1, key2: value2]`, so you could write:
[source, groovy]
----
git([url: 'https://github.com/joe_user/simple-maven-project-with-tests.git', branch: 'master'])
----
For convenience, when calling steps taking only one parameter (or only one
mandatory parameter), you can omit the parameter name. For example:
[source, groovy]
----
sh 'echo hello'
----
is really shorthand for:
[source, groovy]
----
sh([script: 'echo hello'])
----
=== Managing the Environment
One way to use tools by default is to add them to your executable path using the
special variable `env` that is defined for all pipelines:
[source, groovy]
----
node {
    git url: 'https://github.com/joe_user/simple-maven-project-with-tests.git'
    def mvnHome = tool 'M3'
    env.PATH = "${mvnHome}/bin:${env.PATH}"
    sh 'mvn -B verify'
}
----
* Properties of this variable are environment variables on the current node.
* You can override certain environment variables, and the overrides are seen by
subsequent `sh` steps (or anything else that pays attention to environment variables).
* You can run `mvn` without a fully-qualified path.
Setting a variable such as `PATH` in this way is only safe if you are using a
single agent for this build. Alternatively, you can use the `withEnv` step to
set a variable within a scope:
[source, groovy]
----
node {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    withEnv(["PATH+MAVEN=${tool 'M3'}/bin"]) {
        sh 'mvn -B verify'
    }
}
----
Jenkins defines some environment variables by default:
*Example:* `env.BUILD_TAG` can be used to get a tag like 'jenkins-projname-1' from
Groovy code, or `$BUILD_TAG` can be used from a `sh` script. The Snippet Generator
help for the `withEnv` step has more detail on this topic.
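A short illustration of both access styles:

[source,groovy]
----
node {
    echo "Groovy sees ${env.BUILD_TAG}"   // e.g. 'jenkins-projname-1'
    sh 'echo "Shell sees $BUILD_TAG"'     // single quotes: expanded by the shell, not Groovy
}
----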
=== Build Parameters
If you configured your pipeline to accept parameters using the *Build with
Parameters* option, those parameters are accessible as Groovy variables of the
same name.
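For example, if you defined a string parameter named `TARGET_ENV` (the parameter name here is hypothetical):

[source,groovy]
----
node {
    // TARGET_ENV is a hypothetical build parameter defined in the job configuration
    echo "Deploying to ${TARGET_ENV}"
}
----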
=== Types of Executors
Every Pipeline build runs on a Jenkins master using a *flyweight executor*,
which is an uncounted slot (temporary rather than configured). Flyweight
executors require very little computing power. A flyweight executor (sometimes
also called a flyweight task) represents the Groovy script itself, which is idle
while it waits for a step to complete.
To highlight the contrast between executor types, some Jenkins documentation calls any regular executor a *heavyweight executor*.
When you run a `node` step, an executor is allocated on a node, which is usually an agent, as soon as
an appropriate node is available.
It is a best practice to avoid placing `input` within a node. The input element pauses pipeline execution to wait for either automatic or manual approval.
By design and by nature, approval can take some time, so placing `input` within a node wastes resources by tying up both the flyweight executor
used for input and the regular executor used by the node block, which will not be free for other tasks until input is complete.
Although any flyweight executor running a pipeline is hidden when the pipeline script is idle (between tasks), the *Build Executor Status* widget on the Jenkins page displays status for both types of executors. If the
one available executor on an agent has been pressed into service by a pipeline build that is paused and
you start a second build of the same pipeline, both builds are shown running on the master, but the
second build waits in the Build Queue until the initial build completes and executors are free to help with further processing.
When you use inputs, it is a best practice to wrap them in timeouts. Wrapping inputs in timeouts allows them to be cleaned up if
approvals do not occur within a given window. For example:
[source, groovy]
----
timeout(time: 5, unit: 'DAYS') {
    input message: 'Approve deployment?', submitter: 'it-ops'
}
----
=== Recording Test Results and Artifacts
If there are any test failures in a given build, you want Jenkins to record
them and then proceed, rather than stopping. If you want to save the `.jar` that
you built, you must also capture it. The following sample code for a node shows
how (as in previous examples from this guide, Maven is being used as the build
tool):
[source, groovy]
----
node {
    git 'https://github.com/joe_user/simple-maven-project-with-tests.git'
    def mvnHome = tool 'M3'
    sh "${mvnHome}/bin/mvn -B -Dmaven.test.failure.ignore verify"
    archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true
    junit '**/target/surefire-reports/TEST-*.xml'
}
----
(Older versions of Pipeline require a slightly more verbose syntax.
The “snippet generator” can be used to see the exact format.)
* If tests fail, the Pipeline is marked unstable (as denoted by a yellow ball in
the Jenkins UI), and you can browse "Test Result Trend" to see the relevant history.
* You should see Last Successful Artifacts on the Pipeline index page.

@ -0,0 +1,3 @@
---
chapter: 9
sections:

@ -0,0 +1,16 @@
---
layout: chapter
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Plugin Development
[NOTE]
====
This is still very much a work in progress
====

@ -0,0 +1,3 @@
---
chapter: 8
sections:

@ -0,0 +1,16 @@
---
layout: chapter
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Scaling Jenkins
[NOTE]
====
This is still very much a work in progress
====

@ -0,0 +1,4 @@
---
layout: refresh
refresh_to_post_id: '/doc/book/operating-jenkins/security'
---

@ -0,0 +1,3 @@
---
chapter: 2
sections:

@ -0,0 +1,16 @@
---
layout: chapter
---
:notitle:
:description:
:author:
:email: jenkinsci-users@googlegroups.com
:sectanchors:
:toc: left
= Using Jenkins
[NOTE]
====
This is still very much a work in progress
====

@ -4,8 +4,6 @@ title: Jenkins Documentation
section: doc
---
- book_dir = File.expand_path(File.dirname(__FILE__) + '/book')
%p
Jenkins is an automation engine with an unparalleled plugin ecosystem to
support all of your favorite tools in your delivery pipelines, whether your
@ -14,21 +12,21 @@ section: doc
%div
%h3
%a{ name: "pipeline", title: "Pipeline" }
%a{:name => 'pipeline', :href => '/doc/book/pipeline'}
Pipeline
%ul
%li
%a{:href => 'pipeline'}
%a{:href => '/doc/book/pipeline/overview'}
Getting Started with Pipeline
%li
%a{:href => 'pipeline/jenkinsfile'}
%a{:href => '/doc/book/pipeline/jenkinsfile'}
Getting Started with the Jenkinsfile
%li
%a{:href => 'pipeline/steps'}
%a{:href => '/doc/pipeline/steps'}
Pipeline Steps Reference
%li
%a{:href => 'pipeline/examples'}
%a{:href => '/doc/pipeline/examples'}
Pipeline Examples
%hr/
@ -45,14 +43,12 @@ section: doc
%h4
Chapters
%div
%ul
- sections_from(book_dir).each do |section, file, url, children|
/
This section provided by the following document:
= File.basename(file)
%ol
- site.handbook.chapters.keys.sort.each do |c|
- chapter = site.handbook.chapters[c]
%li
%a{:href => url}
= section
%a{:href => File.join('book', chapter.key)}
= chapter.title
%hr/

@ -1,615 +1,4 @@
---
layout: simplepage
title: "Getting Started with Pipeline"
layout: refresh
refresh_to_post_id: '/doc/book/pipeline'
---
:toc:
== Audience and Purpose
This document is intended for novice users of the Jenkins pipeline feature. The
document explains what a pipeline is, why that matters, and how to create the
different kinds of pipelines.
== Why Pipeline?
While standard Jenkins "freestyle" jobs support simple continuous integration by
allowing you to define sequential tasks in an application lifecycle, they do not
create a record of execution that persists through any planned or unplanned
restarts, enable one script to address all the steps in a complex workflow, or
confer the other advantages of pipelines.
In contrast to freestyle jobs, pipelines enable you to define the whole
application lifecycle. Pipeline functionality helps Jenkins to support
continuous delivery (CD). The Pipeline plugin was built with requirements for a
flexible, extensible, and script-based CD workflow capability in mind.
Accordingly, pipeline functionality is:
* Durable: Pipelines can survive both planned and unplanned restarts of your Jenkins master.
* Pausable: Pipelines can optionally stop and wait for human input or approval before completing the jobs for which they were built.
* Versatile: Pipelines support complex real-world CD requirements, including the ability to fork or join, loop, and work in parallel with each other.
* Extensible: The Pipeline plugin supports custom extensions to its DSL (domain scripting language) and multiple options for integration with other plugins.
The flowchart below is an example of one continuous delivery scenario enabled by the Pipeline plugin:
image::/images/pipeline/realworld-pipeline-flow.png[title="Pipeline Flow", 800]
== Pipeline Defined
Pipelines are Jenkins jobs enabled by the Pipeline (formerly called "workflow")
plugin and built with simple text scripts that use a Pipeline DSL
(domain-specific language) based on the Groovy programming language.
Pipelines leverage the power of multiple steps to execute both simple and
complex tasks according to parameters that you establish. Once created,
pipelines can build code and orchestrate the work required to drive applications
from commit to delivery.
== Pipeline Vocabulary
Pipeline terms such as "step," "node," and "stage" are a subset of the vocabulary used for Jenkins in general.
Step::
A "step" (often called a "build step") is a single task that is part of sequence. Steps tell Jenkins what to do.
Node::
In pipeline coding contexts, a "node" is a step that does two things, typically by enlisting help from available executors on agents:
* Schedules the steps contained within it to run by adding them to the Jenkins build queue (so that as soon as an executor slot is free on a node, the appropriate steps run).
* Creates a workspace, meaning a file directory specific to a particular job, where resource-intensive processing can occur without negatively impacting your pipeline performance. Workspaces created by node are automatically removed after all the steps contained inside the node declaration finish executing.
It is a best practice to do all material work, such as building or running shell scripts, within nodes, because node blocks in a stage tell Jenkins that the steps within them are resource-intensive enough to be scheduled, request help from the agent pool, and lock a workspace only as long as they need it.
In Jenkins generally, "node" also means any computer that is part of your Jenkins installation, whether that computer is used as a master or as an agent.
Stage::
A "stage" is a logically distinct part of the execution of any task, with parameters for locking, ordering, and labeling its part of a process relative to other parts of the same process. Pipeline syntax is often composed of stages. Each stage step can have one or more build steps within it.
It is a best practice to work within stages because they help with organization by lending logical divisions to a pipeline, and because the
Jenkins Pipeline visualization feature displays stages as unique segments of the pipeline.
Familiarity with Jenkins terms such as "master," "agent," and "executor" also helps with understanding how pipelines work. These terms are not specific to pipelines:
* master - A "master" is the computer where the Jenkins server is installed and
running; it handles tasks for your build system. Pipeline scripts are parsed
on masters, where Groovy code runs and node blocks allocate executors and
workspaces for use by any nested steps (such as `sh`) that might request one or both.
* agent - An "agent" (formerly "slave") is a computer set up to offload
available projects from the master. Your configuration determines the number
and scope of operations that an agent can perform. Operations are performed by
executors.
* executor - An "executor" is a computational resource for running builds or
Pipeline steps. It can run on master or agent machines, either by itself or in
parallel with other executors.
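The vocabulary above can be illustrated with a minimal sketch (the stage names and shell commands here are hypothetical placeholders, not part of any real project):

[source,groovy]
----
node {                // allocates an executor and a workspace on an agent
    stage 'Build'     // labels the following steps as one logical phase
    sh 'make'         // a step: run a (hypothetical) build command
    stage 'Test'
    sh 'make check'   // another step, in a second stage
}
----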
== Preparing Jenkins to Run Pipelines
To run pipelines, you need to have a Jenkins instance that is set up with the
appropriate plugins. This requires:
* Jenkins 1.642.3 or later (Jenkins 2 is recommended)
* The Pipeline plugin
=== Installing the Pipeline Plugin
The Pipeline plugin is installed in the same way as other Jenkins plugins.
Installing the Pipeline plugin also installs the suite of related plugins on
which it depends:
. Open Jenkins in your web browser.
. On the Manage Jenkins page for your installation, navigate to *Manage Plugins*.
. Find https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin[Pipeline] from among the plugins listed on the Available tab (you can do this by scrolling through the plugin list or by using "Pipeline" as a term to filter results).
. Select the checkbox for the Pipeline plugin.
. Select either *Install without restart* or *Download now and install after restart*.
. Restart Jenkins.
=== Pipeline Plugin Context
The Pipeline plugin works with a suite of related plugins that enhance the
pipeline functionality of your Jenkins setup. The related plugins typically
introduce additional pipeline syntax or visualizations.
For example, the table below, while not comprehensive, describes a few
pipeline-related plugins in terms of their importance to pipeline functionality
(required, recommended, or optional).
To get the basic pipeline functionality, you only need to install the main
Pipeline plugin, but recommended plugins add additional capabilities that you
will probably want. For example, it is a best practice to develop pipelines as code by storing a `Jenkinsfile` with pipeline script in your SCM,
so that you can apply the same version control and testing to pipelines as you do to your other software, and that is why the
Multibranch Pipeline plugin is recommended.
Optional plugins are mainly useful if you are creating pipelines that are
related to the technologies that they support.
[options="header"]
|=======================
|Plugin Name |Description |Status
|Pipeline (workflow-aggregator) | Installs the core pipeline engine and its dependent plugins:
Pipeline: API,
Pipeline: Basic Steps,
Pipeline: Durable Task Step,
Pipeline: Execution Support,
Pipeline: Global Shared Library for CPS pipeline,
Pipeline: Groovy CPS Execution,
Pipeline: Job,
Pipeline: SCM Step,
Pipeline: Step API
| required
| Pipeline: Stage View
| Provides a graphical swimlane view of pipeline stage execution, as well as a build history of the stages
| recommended
| Multibranch Pipeline
| Adds "Multibranch Pipeline" item type which enables Jenkins to automatically
build branches that contain a `Jenkinsfile`
| recommended
| GitHub Branch Source
| Adds GitHub Organization Folder item type and adds "GitHub" as a branch source on Multibranch pipelines
| recommended for teams hosting repositories in GitHub
| Bitbucket Branch Source
| Adds Bitbucket Team item type and adds "Bitbucket" as a branch source on Multibranch pipelines
| recommended for teams hosting repositories in Bitbucket; best with Bitbucket Server 4.0 or later.
| Docker Pipeline
| Enables pipeline to build and use Docker containers inside pipeline scripts.
| optional
|=======================
=== More Information
As with any Jenkins plugin, you can install the Pipeline plugin using the Plugin
Manager in a running Jenkins instance.
To explore Pipeline without installing
Jenkins separately or accessing your production system, you can run a
link:https://github.com/jenkinsci/workflow-aggregator-plugin/blob/master/demo/README.md[Docker
demo] of Pipeline functionality.
Pipeline-related plugins are regularly "whitelisted" as compatible with or
designed for Pipeline usage. For more information, see the
link:https://github.com/jenkinsci/pipeline-plugin/blob/master/COMPATIBILITY.md[Plugin
Compatibility With Pipeline] web page.
When you get flows from source control through `Jenkinsfile` or a link:https://github.com/jenkinsci/workflow-cps-global-lib-plugin/blob/master/README.md[Pipeline Global Library],
you may also have to whitelist method calls in the link:https://wiki.jenkins-ci.org/display/JENKINS/Script+Security+Plugin[Script Security Plugin].
[NOTE]
====
Several plugins available in the Jenkins ecosystem but not actually
related to the Pipeline feature set described in this guide also use the terms
"pipeline," "DSL," and "Job DSL" in their names. For example:
* Build Pipeline plugin - provides a way to execute Jenkins jobs sequentially
* Build Flow Plugin - introduces a job type that lets you define an orchestration process as a script.
This guide describes the link:https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin[Pipeline Plugin] that supports the current Pipeline feature set.
====
== Approaches to Defining Pipeline Script
You can create pipelines in either of the following ways:
* Through script entered in the configuration page of the web interface for your Jenkins instance.
* Through a `Jenkinsfile` that you create with a text editor and then check into your project's source control repository, where it can be accessed when you select the *Pipeline Script from SCM* option while configuring the Pipeline in Jenkins.
[NOTE]
====
When you use a `Jenkinsfile`, it is a best practice to put `#!groovy` at the top of the file so that IDEs and
GitHub diffs detect the Groovy language properly.
====
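For example, a minimal `Jenkinsfile` with the recommended first line might look like this (the echoed message is just a placeholder):

[source,groovy]
----
#!groovy
node {
    echo 'Hello from a Jenkinsfile'
}
----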
== Creating a Simple Pipeline
Initial pipeline usage typically involves the following tasks:
. Downloading and installing the Pipeline plugin (unless it is already part of your Jenkins installation)
. Creating a Pipeline of a specific type
. Configuring your Pipeline
. Controlling flow (workflow) through your Pipeline
. Scaling your Pipeline
To create a simple pipeline from the Jenkins interface, perform the following steps:
. Click *New Item* on your Jenkins home page, enter a name for your (pipeline) job, select *Pipeline*, and click *OK*.
. In the Script text area of the configuration screen, enter your pipeline script. If you are new to pipeline creation, you might want to start by opening Snippet Generator and selecting the "Hello World" snippet.
. Check the *Use Groovy Sandbox* option below the Script text area.
. Click *Save*.
. Click *Build Now* to create the pipeline.
. Click ▾ and select *Console Output* to see the output.
Pipelines are written as Groovy scripts that tell Jenkins what to do when they
are run. Relevant bits of syntax are introduced as needed, so while an
understanding of Groovy is helpful, it is not required to use Pipeline.
If you are a Jenkins administrator (in other words, authorized to approve your
own scripts), sandboxing is optional but efficient, because it lets scripts run
without approval as long as they limit themselves to operations that Jenkins
considers inherently safe.
[NOTE]
====
To use pathnames that include spaces, wrap those pathnames in escaped double quotes (`\"`).
The extra quotation marks ensure that any spaces in pathnames are parsed properly.
====
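For example, a sketch of a step that passes a (hypothetical) pathname containing spaces to the shell:

[source,groovy]
----
node {
    // The escaped double quotes keep "build output" together as a single argument
    sh "ls -l \"/tmp/build output\""
}
----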
The following example shows a successful build of a pipeline created with a
one-line script that uses the `echo` step to output the phrase, "Hello from
Pipeline":
[source,groovy]
----
node {
    echo 'Hello from Pipeline'
}
----
----
Started by user anonymous
[Pipeline] echo
Hello from Pipeline
[Pipeline] End of Pipeline
Finished: SUCCESS
----
[NOTE]
====
You can also create complex and multibranch pipelines in the script entry
area of the Jenkins configuration page, but because they contain multiple stages
and the configuration page UI provides limited scripting space, pipeline
creation is more commonly done using an editor of your choice from which scripts
can be loaded into Jenkins using the *Pipeline script from SCM* option.
====
It is a best practice to use parallel steps whenever you can, as long as you remember not to attempt so much parallel processing
that it swamps the number of available executors. For example, you can acquire a node within the parallel branches of your pipeline:
[source,groovy]
----
parallel 'integration-tests': {
    node('mvn-3.3') { /* resource-intensive integration tests run here */ }
}, 'functional-tests': {
    node('selenium') { /* browser-based functional tests run here */ }
}
----
== Creating Multibranch Pipelines
The *Multibranch Pipeline* project type enables you to configure different jobs
for different branches of the same project. In a multibranch pipeline
configuration, Jenkins automatically discovers, manages, and executes jobs
for multiple source repositories and branches. This eliminates the need for
manual job creation and management, as would otherwise be necessary
when, for example, a developer adds a new feature to an existing
product.
A multibranch pipeline project always includes a `Jenkinsfile` in its
repository root. Jenkins automatically creates a sub-project for each branch
that it finds in a repository with a `Jenkinsfile`.
Multibranch pipelines use the same version control as the rest of your software
development process. This "pipeline as code" approach has the following
advantages:
* You can modify pipeline code without special editing permissions.
* Finding out who changed what and why no longer depends on whether developers remember to comment their code changes in configuration files.
* Version control makes the history of changes to code readily apparent.
To create a Multibranch Pipeline:
. Click New Item on your Jenkins home page, enter a name for your job, select Multibranch Pipeline, and click OK.
. Configure your SCM source (options include Git, GitHub, Mercurial, Subversion, and Bitbucket), supplying information about the owner, scan credentials, and repository in appropriate fields.
For example, if you select Git as the branch source, you are prompted for the usual connection information, but then rather than enter a fixed refspec (Git's name for a source/destination pair), you would enter a branch name pattern (Use default settings to look for any branch).
. Configure the other multibranch pipeline options:
* API endpoint - an alternate API endpoint to use with a self-hosted GitHub Enterprise installation
* Checkout credentials - alternate credentials to use when checking out the code (cloning)
* Include branches - a regular expression to specify branches to include
* Exclude branches - a regular expression to specify branches to exclude; note that this takes precedence over the contents of include expressions
. Save your configuration.
Jenkins automatically scans the designated repository and creates appropriate branches.
For example (again in Git), if you started with a master branch, and then wanted
to experiment with some changes, and so did `git checkout -b newfeature` and
pushed some commits, Jenkins would automatically detect the new branch in your
repository and create a new sub-project for it. That sub-project would have its
own build history unrelated to the trunk (main line).
If you choose, you can ask for the sub-project to be automatically removed after
its branch is merged with the main line and deleted. To change your Pipeline
script—for example, to add a new Jenkins publisher step corresponding to new
reports that your `Makefile`/`pom.xml`/etc. is creating—you edit the appropriate
`Jenkinsfile`. Your Pipeline script is always synchronized with
the rest of the source code you are working on.
*Multibranch Pipeline* projects expose the name of the branch being built with
the `BRANCH_NAME` environment variable. In multibranch pipelines, the `checkout
scm` step checks out the specific commit from which the `Jenkinsfile` originated, so
as to maintain branch integrity.
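For example, a `Jenkinsfile` might branch its behavior on `BRANCH_NAME` like this (the deployment script shown is a hypothetical placeholder):

[source,groovy]
----
node {
    checkout scm           // checks out the commit this Jenkinsfile came from
    sh 'mvn -B verify'
    if (env.BRANCH_NAME == 'master') {  // set by Multibranch Pipeline projects
        sh './deploy.sh'   // hypothetical deployment script
    }
}
----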
== Loading Pipeline Scripts from SCM
Complex pipelines would be cumbersome to write and maintain if you could only do
that in the text area provided by the Jenkins job configuration page.
Accordingly, you also have the option of writing pipeline scripts (Jenkinsfiles)
with the editor that you use in your IDE (integrated development environment) or
SCM system, and then loading those scripts into Jenkins using the *Pipeline
Script from SCM* option enabled by the workflow-scm-step plugin, which is one of
the plugins that the Pipeline plugin depends on and automatically installs.
Loading pipeline scripts using the `checkout scm` step leverages the
idea of "pipeline as code," and lets you maintain pipelines using version
control and standalone Groovy editors.
To do this, select *Pipeline script from SCM* when defining the pipeline.
With the *Pipeline script from SCM* option selected, you do not enter any Groovy
code in the Jenkins UI; you simply specify the path within source control from
which the pipeline script should be retrieved. When you update the designated
repository, a new build is triggered, as long as your job is configured with an
SCM polling trigger.
== Writing Pipeline Scripts in the Jenkins UI
Because Pipelines are composed of text scripts, they can be written (edited) in
the same script creation area of the Jenkins user interface where you create
them:
image::/images/pipeline/pipeline-editor.png[title="Pipeline Editor", 800]
NOTE: You determine which kind of pipeline you want to set up before writing it.
=== Using Snippet Generator
You can use the Snippet Generator tool to create syntax examples for individual
steps with which you might not be familiar, or to add relevant syntax to a step
with a long and complex configuration.
Snippet Generator is dynamically populated with a list of the steps available
for pipeline configuration. Depending on the plugins installed to your Jenkins
environment, you may see more or fewer items in the list exposed by Snippet
Generator.
To add one or more steps from Snippet Generator to your pipeline code:
. Open Snippet Generator
. Scroll to the step you want
. Click that step
. Configure the selected step, if presented with configuration options
. Click *Generate Groovy* to see a Groovy snippet that runs the step as configured
. Optionally select and configure additional steps
image::/images/pipeline/snippet-generator.png[title="Snippet Generator", 800]
When you click *Generate Groovy* after selecting a step, you see the function
name used for that step, the names of any parameters it takes (if they are not
default parameters), and the syntax used by Snippet Generator to create that
step.
You can copy and paste the generated code right into your Pipeline, or use it as
a starting point, perhaps deleting any optional parameters that you do not need.
To access information about steps marked with the help icon (question mark),
click on that icon.
== Basic Groovy Syntax for Pipeline Configuration
You typically add functionality to a new pipeline by performing the following tasks:
* Adding nodes
* Adding more complex logic (usually expressed as stages and steps)
To configure a pipeline you have created through the Jenkins UI, select the
pipeline and click *Configure*.
If you run Jenkins on Linux or another Unix-like operating system with a Git
repository that you want to test, for example, you can do that with syntax like
the following, substituting your own name for "joe_user":
[source, groovy]
----
node {
    git url: 'https://github.com/joe_user/simple-maven-project-with-tests.git'
    def mvnHome = tool 'M3'
    sh "${mvnHome}/bin/mvn -B verify"
}
----
In Windows environments, use `bat` in place of `sh` and use backslashes as the
file separator where needed (backslashes need to be escaped inside strings).
For example, rather than:
[source, groovy]
----
sh "${mvnHome}/bin/mvn -B verify"
----
you would use:
[source, groovy]
----
bat "${mvnHome}\\bin\\mvn -B verify"
----
Your Groovy pipeline script can include functions, conditional tests, loops,
try/catch/finally blocks, and so on.
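For example, a sketch of a try/catch/finally block in a pipeline (the test command and report path are hypothetical):

[source,groovy]
----
node {
    try {
        sh 'make test'                    // hypothetical test command
    } catch (err) {
        echo "Caught failure: ${err}"
        currentBuild.result = 'UNSTABLE'  // mark the build unstable instead of failing outright
    } finally {
        junit '**/reports/*.xml'          // always record results (hypothetical path)
    }
}
----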
Sample syntax for one node in a Java environment that is using the open source
Maven build automation tool (note the definition for `mvnHome`) is shown below:
image::/images/pipeline/pipeline-sample.png[title="Pipeline Sample", 800]
Pipeline Sample (graphic) key:
* `def` is a keyword to define a function (you can also give a Java type in
place of `def` to make it look more like a Java method)
* `=~` is Groovy syntax to match text against a regular expression
* `[0]` looks up the first match
* `[1]` looks up the first `(…)` group within that match
* `readFile` step loads a text file from the workspace and returns its content
(Note: Do not use `java.io.File` methods, these refer to files on the master
where Jenkins is running, not files in the current workspace).
* The `writeFile` step saves content to a text file in the workspace
* The `fileExists` step checks whether a file exists without loading it.
The `tool` step makes sure a tool with the given name is installed on the current
node. The script needs to know where the tool was installed so that it can be run
later. For this, you need a variable.
The `def` keyword in Groovy is the quickest way to define a new variable (with no specific type).
In the sample syntax discussed above, a variable is defined by the following expression:
[source, groovy]
----
def mvnHome = tool 'M3'
----
This ensures that 'M3' is installed somewhere accessible to Jenkins and assigns
the return value of the step (an installation path) to the `mvnHome` variable.
== Advanced Groovy Syntax for Pipeline Configuration
Groovy lets you omit parentheses around function arguments. The named-parameter
syntax is also a shorthand for creating a map, which in Groovy uses the syntax
`[key1: value1, key2: value2]`, so you could write:
[source, groovy]
----
git([url: 'https://github.com/joe_user/simple-maven-project-with-tests.git', branch: 'master'])
----
For convenience, when calling steps taking only one parameter (or only one
mandatory parameter), you can omit the parameter name. For example:
[source, groovy]
----
sh 'echo hello'
----
is really shorthand for:
[source, groovy]
----
sh([script: 'echo hello'])
----
=== Managing the Environment
One way to use tools by default is to add them to your executable path using the
special variable `env` that is defined for all pipelines:
[source, groovy]
----
node {
git url: 'https://github.com/joe_user/simple-maven-project-with-tests.git'
def mvnHome = tool 'M3'
env.PATH = "${mvnHome}/bin:${env.PATH}"
sh 'mvn -B verify'
}
----
* Properties of this variable are environment variables on the current node.
* You can override certain environment variables, and the overrides are seen by
subsequent `sh` steps (or anything else that pays attention to environment variables).
* You can run `mvn` without a fully-qualified path.
Setting a variable such as `PATH` in this way is only safe if you are using a
single agent for this build. Alternatively, you can use the `withEnv` step to
set a variable within a scope:
[source, groovy]
----
node {
git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
withEnv(["PATH+MAVEN=${tool 'M3'}/bin"]) {
sh 'mvn -B verify'
}
}
----
Jenkins defines some environment variables by default:
*Example:* `env.BUILD_TAG` can be used to get a tag like 'jenkins-projname-1' from
Groovy code, or `$BUILD_TAG` can be used from a `sh` script. The Snippet Generator
help for the `withEnv` step has more detail on this topic.
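The two forms of access can be sketched together (the echoed text is illustrative):

[source,groovy]
----
node {
    echo "Groovy sees ${env.BUILD_TAG}"  // e.g. 'jenkins-projname-1'
    sh 'echo "Shell sees $BUILD_TAG"'    // the same value, expanded by the shell
}
----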
=== Build Parameters
If you configured your pipeline to accept parameters using the *Build with
Parameters* option, those parameters are accessible as Groovy variables of the
same name.
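For example, assuming you defined a String parameter named `TARGET_ENV` in the job configuration (the parameter name is a hypothetical placeholder), you could reference it directly:

[source,groovy]
----
node {
    // TARGET_ENV is exposed as a Groovy variable of the same name
    echo "Deploying to ${TARGET_ENV}"
}
----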
=== Types of Executors
Every Pipeline build runs on a Jenkins master using a *flyweight executor*,
which is an uncounted slot (temporary rather than configured). Flyweight executors
require very little computing power. A flyweight executor (sometimes also called
a flyweight task) represents the Groovy script itself, which is idle while it waits for a step to complete.
To highlight the contrast between executor types, some Jenkins documentation calls any regular executor a *heavyweight executor*.
When you run a `node` step, an executor is allocated on a node, which is usually an agent, as soon as
an appropriate node is available.
It is a best practice to avoid placing `input` within a node. The input element pauses pipeline execution to wait for either automatic or manual approval.
By design and by nature, approval can take some time, so placing `input` within a node wastes resources by tying up both the flyweight executor
used for input and the regular executor used by the node block, which will not be free for other tasks until input is complete.
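A sketch of this best practice, keeping `input` outside the `node` block so that no regular executor is held during approval (the stage name and deployment script are hypothetical):

[source,groovy]
----
stage 'Approval'
input 'Deploy to production?'  // pauses only the flyweight executor
node {
    sh './deploy.sh'           // hypothetical deployment script
}
----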
Although any flyweight executor running a pipeline is hidden when the pipeline script is idle (between tasks), the *Build Executor Status* widget on the Jenkins page displays status for both types of executors. If the
one available executor on an agent has been pressed into service by a pipeline build that is paused and
you start a second build of the same pipeline, both builds are shown running on the master, but the
second build waits in the Build Queue until the initial build completes and executors are free for further processing.
When you use inputs, it is a best practice to wrap them in timeouts. Wrapping inputs in timeouts allows them to be cleaned up if
approvals do not occur within a given window. For example:
[source, groovy]
----
timeout(time: 5, unit: 'DAYS') {
    input message: 'Approve deployment?', submitter: 'it-ops'
}
----
=== Recording Test Results and Artifacts
If there are any test failures in a given build, you want Jenkins to record
them and then proceed, rather than stopping. To save the `.jar` that you built,
you must archive it. The following sample code for a node shows how
(as in earlier examples from this guide, Maven is used as
the build tool):
[source, groovy]
----
node {
    git 'https://github.com/joe_user/simple-maven-project-with-tests.git'
    def mvnHome = tool 'M3'
    sh "${mvnHome}/bin/mvn -B -Dmaven.test.failure.ignore verify"
    archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true
    junit '**/target/surefire-reports/TEST-*.xml'
}
----
(Older versions of Pipeline require a slightly more verbose syntax.
The “snippet generator” can be used to see the exact format.)
* If tests fail, the Pipeline is marked unstable (as denoted by a yellow ball in
the Jenkins UI), and you can browse "Test Result Trend" to see the relevant history.
* You should see Last Successful Artifacts on the Pipeline index page.
---
layout: simplepage
title: "Getting Started with the Jenkinsfile"
---
:toc:
== Audience and Purpose
This document is intended for Jenkins users who want to leverage the power of
pipeline functionality. Extending the reach of what was learned from a "Hello
World" example in link:/doc/pipeline/[Getting Started with Pipeline], this
document explains how to use a `Jenkinsfile` to perform a simple checkout and
build for the contents of a repository.
== Creating a Jenkinsfile
A `Jenkinsfile` is a container for your pipeline (or other) script, which details
what specific steps are needed to perform a job for which you want to use
Jenkins. You create a `Jenkinsfile` with your preferred Groovy editor, or through
the configuration page on the web interface of your Jenkins instance.
Using a Groovy editor to code a `Jenkinsfile` gives you more flexibility for
building complex single or multibranch pipelines, but whether you use an editor
or the Jenkins interface does not matter if what you want to do is get familiar
with basic `Jenkinsfile` content.
. Open your Jenkins instance or Groovy editor.
. Navigate to the directory you want (it should be the root directory for your project).
. Use standard Jenkins syntax.
. Save your file.
The following example shows a basic `Jenkinsfile` made to build and test code for
a Maven project. `node` is the step that schedules tasks in the following block
to run on the machine (usually an agent) that matches the label specified in the
step argument (in this case, a node labeled "linux"). Code between the braces (
`{` and `}` ) is the body of the `node` step. The `checkout scm` command
indicates that this `Jenkinsfile` was created with an eye toward multibranch
support:
[source,groovy]
----
node('linux') {
    stage 'Build and Test'
    env.PATH = "${tool 'Maven 3'}/bin:${env.PATH}"
    checkout scm
    sh 'mvn clean package'
}
----
In single-branch contexts, you could replace `checkout scm` with a source code
checkout step that calls a particular repository, such as:
[source,groovy]
----
git url: "https://github.com/my-organization/simple-maven-project-with-tests.git"
----
== Making Pull Requests
A pull request notifies the person responsible for maintaining a Jenkins
repository that you have a change or change set that you want to see merged into
the main branch associated with that repository. Each individual change is
called a "commit."
You make pull requests from a command line, or by selecting the appropriately
labeled button (typically "Pull" or "Create Pull Request") in the interface for
your source code management system.
A pull request to a repository included in or monitored by an Organization
Folder can be used to automatically execute a multibranch pipeline build.
== Using Organization Folders
Organization folders enable Jenkins to automatically detect and include any new
repositories within them as resources.
When you create a new repository (as might be the case for a new project), that
repository can include a `Jenkinsfile`. If you also configure one or more organization
folders, Jenkins automatically detects any repository in an organization folder,
scans the contents of that repository at either default or configurable
intervals, and creates a Multibranch Pipeline project for what it finds in the
scan. An organization folder functions as a "parent," and any item within it is
treated as a "child" of that parent.
Organization folders alleviate the need to manually create projects for new
repositories. When you use organization folders, Jenkins views your repositories
as a hierarchy, and each repository (organization folder) may optionally have
child elements such as branches or pull requests.
To create Organization folders:
. Open Jenkins in your web browser.
. Go to: New Item → GitHub Organization or New Item → Bitbucket Team.
. Follow the configuration steps, making sure to specify appropriate scan
credentials and a specific owner for the GitHub Organization or Bitbucket Team
name.
. Set build triggers by selecting the checkbox associated with the trigger type
you want. Folder scans and the pipeline builds associated with those scans can
be initiated by command scripts or performed at defined intervals. They can also
be triggered by project promotion or changes to the images in a monitored Docker
hub.
. Decide whether to automatically remove or retain unused items. "Orphaned Item
Strategy" fields in the configuration interface let you specify how many days to
keep old items, and how many old items to keep. If you enter no values in these
fields, unused items are removed by default.
While configuring organization folders, you can set the following options:
* Repository name pattern - a regular expression to specify which repositories are included in scans
* API endpoint - an alternate API endpoint to use with a self-hosted GitHub Enterprise installation
* Checkout credentials - alternate credentials to use when checking out (cloning) code
Multibranch Pipeline projects and Organization Folders are examples of
"computed folder" functionality. In Multibranch Pipeline projects, computation
creates child items for eligible branches. In Organization folders, computation
populates child items as individual Multibranch Pipelines for scanned
repositories.
Select the "Folder Computation" section of your Jenkins interface to see the
duration (in seconds) and result (success or failure) of computation operations,
or to access a Folder Computation Log that provides more detail about this
activity.
== Basic Checkout and Build
Checkout and build command examples are shown in the code example used by the
introduction above. Examples shown assume that Jenkins is running on Linux or
another Unix-like operating system.
If your Jenkins server or agent is running on Windows, you are less likely to be
using the Bourne shell (`sh`) or
link:http://www.computerhope.com/unix/ubash.htm[Bourne-Again shell] (`bash`) as
a command language interpreter for starting software builds. In Windows
environments, use `bat` in place of `sh`, and backslashes (`\`) rather than
slashes as file separators in pathnames.
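Putting this together, a Windows version of the earlier checkout-and-build example might be sketched as follows (the "windows" node label is an assumption about your agent configuration):

[source,groovy]
----
node('windows') {
    checkout scm
    def mvnHome = tool 'M3'
    bat "${mvnHome}\\bin\\mvn -B verify"  // note the escaped backslashes
}
----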