How to configure a high availability chassis cluster on a Juniper Networks SRX210 device

Welcome to the Juniper KBTV SRX Series. This video demonstrates how to configure a chassis cluster on a pair of SRX210 devices. Refer to KB15505 for a text version of these instructions. This video will provide an overview of cluster
topology, identify cluster prerequisites, show interface and cabling guidelines, provide step-by-step Junos OS commands
to configure the cluster, and use Junos commands
to verify the cluster status. For details on clustering other SRX
series models, refer to KB15505, which has links to other
platforms. Here is the topology diagram for this configuration example. It is helpful to point out the interfaces that we will be configuring. The fe-0/0/7 interface is always used for the control link when clustering SRX210 devices.
This is system defined. The ge-0/0/1 port will be used for the fabric or data link
in our example because it is a gig port. This is user defined. The ge-0/0/0 interfaces will be defined in the untrust zone and form redundancy
interface reth 0.0. The fe-0/0/2 interfaces will be defined in the trust zone and form redundancy interface reth 1.0. Prerequisites: In the SRX
configuration any existing configuration associated with the interfaces that will
be transformed into fxp0
for out of band management and fxp1 for the control link must be removed. For the SRX210, these interfaces are fe-0/0/6 and fe-0/0/7. The fe-0/0/6 interface will be mapped to fxp0 for out of band management and
the fe-0/0/7 interface will be mapped to fxp1 for the control link. The interfaces that are mapped to fxp0 and fxp1 are device specific. Next, confirm that the hardware on both
devices is the same. Verify using the command “show chassis hardware” on each node. Next, confirm that the software on both
standalone devices is the same Junos OS version. Verify using the “show version”
command on both nodes. And last, confirm that the license keys
are the same on both devices. There is not a separate license for chassis
cluster; however, both firewalls must have identical features and license
keys enabled or installed. Verify using the “show system license”
command on both nodes. Step 1: Connect control and fabric links between
the nodes. The control and fabric links are back-to-back connections between the devices forming the cluster.
On the SRX210 device, the control link or fxp1 is fe-0/0/7.
Connect fe-0/0/7 on Device A to fe-0/0/7 on Device B. The fe-0/0/7
port on Device B will be referred to as fe-2/0/7 when clustering is enabled.
For the fabric or data link any open port can be used but a gig port is recommended.
Therefore, for this example ge-0/0/1 has been selected for the fab
link since it is a gig port. Connect ge-0/0/1 on Device A to ge-0/0/1 on Device B.
The ge-0/0/1 port on Device B will be referred to as ge-2/0/1 when clustering is enabled. Step 2: Enable cluster mode and reboot
the devices. Set the devices in cluster mode with the command
“set chassis cluster cluster-id <cluster-id> node <node-id> reboot” on each device; this also reboots the device. The cluster ID will be the same on both
devices but the node ID must be different. The device to be made primary should be rebooted first. For example, use “set chassis cluster cluster-id 1 node 0 reboot” on Device A and “set chassis cluster cluster-id 1 node 1 reboot” on Device B. Step 3: Configure the host names and
management IP addresses Note: Steps 3 through 7 can all be performed on the
primary device and they will be automatically copied over to the
secondary device when a commit is performed in Step 8. Note: The “apply-groups” command is
required so that the individual configuration for each node is applied only to that node. Step 4: Configure the fab links For this example we will use physical ports ge-0/0/1 from each node since a gig port is recommended. Fab 0 is the data link interface for node 0. For
our example, we’re assigning ge-0/0/1 to fab 0. Fab 1 is the data link for node 1.
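The Step 3 and Step 4 settings described here can be sketched as the following Junos set commands. This is a minimal sketch: the host names and fxp0 management addresses are illustrative assumptions, not values shown in the video; the fab-link assignments match this example's topology.

```
# Step 3: node-specific host names and out-of-band management addresses
# (host names and addresses below are illustrative)
set groups node0 system host-name SRX210-A
set groups node0 interfaces fxp0 unit 0 family inet address 10.1.1.1/24
set groups node1 system host-name SRX210-B
set groups node1 interfaces fxp0 unit 0 family inet address 10.1.1.2/24
set apply-groups "${node}"

# Step 4: fabric (data) link member interfaces
set interfaces fab0 fabric-options member-interfaces ge-0/0/1
set interfaces fab1 fabric-options member-interfaces ge-2/0/1
```

The “${node}” variable in apply-groups ensures that each node applies only its own group, which is why the node-specific settings can be committed once from the primary.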
We’re assigning ge-2/0/1 to fab 1. Step 5: Configure redundancy groups 0 and 1 Set up redundancy group 0 for the
routing engine failover properties. Also set up Redundancy Group 1 to define the failover properties for the reth interfaces. All revenue interfaces will
be part of one redundancy group in this example. Step 6: Configure interface monitoring Monitoring the health of the interfaces is one way to trigger redundancy group failover. Step 7: Configure reth interfaces First, make sure that the maximum
number of redundant interfaces is defined. Start configuring the redundant
interface reth1 by assigning the physical interfaces fe-0/0/2 and fe-2/0/2 to reth1. Next, we assign the reth1 interface to
Redundancy Group 1. Then we assign an IP address to the reth1 interface. Now we configure the redundant interface reth0 by assigning the physical
interfaces ge-0/0/0 and ge-2/0/0 to reth0. Then we assign the reth0 interface
to Redundancy Group 1 and we assign an IP address to the reth0 interface. Next, add the reth interfaces to the security zones. Step 8: Commit and the changes will be
copied over to Device B. This will prepare the basic clustering settings
for both devices. Verifying chassis cluster status Run the command “show chassis cluster status” to check the current status of the chassis
cluster. When the chassis cluster is up you will see a cluster ID:
in this example it is 1. You will see Redundancy Group 0 with a node 0 and node 1 listed, along with their priorities. You will also see Redundancy Group 1 with a node 0 and node 1 listed. If you do not see something like this and the cluster is
not up, refer to the Chassis Cluster Resolution Guide in the next
troubleshooting section. Troubleshooting For troubleshooting, refer to the Chassis
Cluster Resolution Guide in KB21905. It contains configuration and upgrade articles as well as resolution guides and step-by-step
troubleshooting for: chassis cluster not up, chassis cluster not failing over, and cannot manage chassis cluster. Refer to KB15505
for the text version of this video. Be sure to check out our other videos. If further assistance is needed, please visit our support site. Thanks for watching!
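For quick reference, the configuration described in Steps 5 through 7 can be sketched as follows. This is a hedged sketch, not the exact configuration from the video: the reth IP addresses are illustrative assumptions, and the interface-monitor weight of 255 is a common convention (a single monitored interface going down triggers failover), not a value stated in the transcript.

```
# Step 5: redundancy groups (node 0 preferred as primary)
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1

# Step 6: interface monitoring for redundancy group 1
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/0 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-2/0/0 weight 255
set chassis cluster redundancy-group 1 interface-monitor fe-0/0/2 weight 255
set chassis cluster redundancy-group 1 interface-monitor fe-2/0/2 weight 255

# Step 7: reth count, member interfaces, addresses (illustrative), and zones
set chassis cluster reth-count 2
set interfaces ge-0/0/0 gigether-options redundant-parent reth0
set interfaces ge-2/0/0 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 203.0.113.1/24
set interfaces fe-0/0/2 fastether-options redundant-parent reth1
set interfaces fe-2/0/2 fastether-options redundant-parent reth1
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet address 192.168.1.1/24
set security zones security-zone untrust interfaces reth0.0
set security zones security-zone trust interfaces reth1.0
```

After committing on the primary, verify with “show chassis cluster status”; “show chassis cluster interfaces” and “show chassis cluster statistics” are also useful for checking the control and fabric links.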
