Perl script to get output from multiple devices and export


I'm trying to execute a show command across a group of Cisco devices and capture the output. The problem is that the output is confined to each device within the job, so it is cumbersome to click into each device to get the output. I'm writing a custom script to append the output to a local file on the sandbox. This is working out pretty well so far, but I would like to have the script copy the file off to a user-specified destination via scp (or even ftp/tftp). Any ideas on how to make this work? Or is there some other way to accomplish this in NetMRI that I am overlooking?

# Script-Filter:
#    $Vendor eq "Cisco"
# Script-Variables:
#      $command string "Enter command to be run"

use strict;
use warnings;
use NetMRI_Easy 0.6;
require "";

our $command;
my $easy = NetMRI_Easy->new({ api_version => 2.5 });

my $device = $easy->device->DeviceName;
my $command_out = $easy->send_command($command);

$easy->log_message("info", "Command Variable set to $command");

open(my $fh, '>>', 'data.txt') or die "Cannot open data.txt: $!";
print $fh "Device: $device\n\n";
print $fh "Command: $command\n\n";
print $fh "Output:\n\n$command_out\n\n";
close($fh);

$easy->log_message("info", "Executed command and logged to data.txt on sandbox");



There are a number of ways to accomplish this. 

First, let's start with what you have so far. Using the job engine, the script will be executed once for each device. So, in order to get it to work this way, there are two more things needed: 1) you need to know when the job is done for all devices; and 2) you need to know how to copy things off.

Devices are done in no specific order. So, one way to determine when the last device is done would be to query the job_details API with your job ID, and check that there are none in Pending or Running states. Something like:

my @status = $easy->broker->job_details({ JobID => $easy->batch_id });
my $found_running = 0;
foreach my $s (@status) {
  next if $s->DeviceID == $easy->DeviceID; # we know our instance is not done
  $found_running = 1 if $s->Status =~ /^(Pending|Running)$/;
}

unless ($found_running) {
  # copy file to external server here
}

Another way to do this would be to run a single external script that uses the Jobs API to launch the job, then copies the file off once the job is completely done. But if you do that, you need an external scheduler to run that external script, because external scripts do not run "against" a device. This could be done with cron in the sandbox.
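For example, a crontab entry on the sandbox might look like this (the wrapper script name and the schedule below are placeholders, not from this thread):

```
# Run the external wrapper nightly at 02:00; it launches the job via
# the Jobs API, waits for completion, then copies the file off.
0 2 * * * /usr/bin/perl /sbuser/run_job_and_copy.pl >> /sbuser/job.log 2>&1
```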


OK, so, then, how do you send the file? That depends on the destination server. For scp, you generate an SSH key pair on the sandbox, then put the public key in the authorized_keys file on the destination system. That way, scp will not prompt for a password.

netmri> sandbox login

NOTICE: The inactivity timeout is being disabled temporarily during session.

NetworkAutomation netmri
-bash-4.2# pwd
-bash-4.2# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /sbuser/.ssh/id_rsa
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /sbuser/.ssh/id_rsa.
Your public key has been saved in /sbuser/.ssh/
The key fingerprint is:
1b:ed:eb:2f:0f:1d:52:74:c7:51:20:c3:99:9d:f4:15 root@sandbox
The key's randomart image is:
+--[ RSA 2048]----+
|           .+=+EB|
|           .++ooo|
|            .   .|
|         . .     |
|        S o .    |
|         + o .   |
|        . o .    |
|          .o     |
|         .o+o    |
+-----------------+
-bash-4.2# ls .ssh/
authorized_keys  id_rsa


Now, append the contents of the public key file to the authorized_keys file on the destination server. Note that authorized_keys must have restrictive permissions or this won't work.

[jbelamaric@jbelamaric-netmri53 ~]$ ls .ssh/
[jbelamaric@jbelamaric-netmri53 ~]$ cat > .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXQWgJ3/TlSjn9wyYsr56eJQ5WvTjrkrGXt7tbVi8pkVnfNt58S8FPabTmdEMMO6NR37jg3Kb4+NFJcNnBCyPB/GDbMhmWfILZXLc5sq4k/gSEfulFzLYOF1wuXPo3zRb1Z8gVDAM3ylao2aBhd6vHxYiqWeRUtM2KwriaXEMZaK70zIDKfCle+AlXwHsYU3qdzgElL4ujSNsFU3t2lJjV0LUxgw884CdELBCmN2r28FDePymxAMNv3HHYF/m2uaQ+AQk6HxMPGjsC1PlXf/pWGAc8avzDkLMYVHhrlCDsFRlmy2W3L0zzKQGzJQGC3OHv01wS8EIW7c4MnCfrYkt7 root@sandbox
[jbelamaric@jbelamaric-netmri53 ~]$ ls .ssh
authorized_keys  known_hosts
[jbelamaric@jbelamaric-netmri53 ~]$ ls -l .ssh/authorized_keys 
-rw-rw-r-- 1 jbelamaric jbelamaric 394 Feb 25 14:02 .ssh/authorized_keys

[jbelamaric@jbelamaric-netmri53 ~]$ chmod 600 .ssh/authorized_keys 

[jbelamaric@jbelamaric-netmri53 ~]$ ls -l .ssh/
total 24
-rw------- 1 jbelamaric jbelamaric   394 Feb 25 14:02 authorized_keys
-rw-r--r-- 1 jbelamaric jbelamaric 19580 Feb 21 13:33 known_hosts
[jbelamaric@jbelamaric-netmri53 ~]$ 


Now, you should be able to scp directly from the sandbox to the destination server, without a password:


-bash-4.2# scp -i .ssh/id_rsa test.txt jbelamaric@
NetworkAutomation jbelamaric
test.txt                                                                                                                          100%    6     0.0KB/s   00:00    


So, in your script, you just do:

system("scp -i /sbuser/.ssh/id_rsa data.txt jbelamaric@");
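Note that system() returns the child's exit status, so you can detect a failed copy rather than silently losing the file. A minimal sketch using the list form of system() (the destination in the usage comment is a placeholder, not from this thread):

```perl
use strict;
use warnings;

# Run an external command and return its exit status (0 on success).
# The list form of system() bypasses the shell, so arguments with
# spaces or metacharacters are passed through unmangled.
sub run_copy {
    my @cmd = @_;
    my $rc  = system(@cmd);
    return $rc == 0 ? 0 : ($rc >> 8);
}

# Hypothetical destination; substitute your own server and path:
# my $status = run_copy('scp', '-i', '/sbuser/.ssh/id_rsa',
#                       'data.txt', 'user@host:/tmp/');
# warn "scp exited with status $status\n" if $status;
```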




Huh. That first method of checking the statuses doesn't really work well, because there is a race condition between querying the status and acting on it; that is, the other instances may complete after we call job_details, and we won't know. So the file may never be copied. The external script is the better solution.


Thanks, this is very helpful! I really wanted to keep our script repository within the app, to avoid the current state of ad-hoc scripts running all over the place. I think I will do some testing with the job status query and see how it works. Excellent instructions on setting up the ssh keys and copying the file out with scp.


Sure. You can salvage the job-status-query approach by making one instance "special" and having it be the one that does the file transfer. For example, you could have only the instance running against the device with the maximum DeviceID do the status checks:


my @status = $easy->broker->job_details({ JobID => $easy->batch_id });

my $max = 0;
foreach my $s (@status) {
  $max = $s->DeviceID if $s->DeviceID > $max;
}

if ($max == $easy->DeviceID) {
  @status = $easy->broker->job_details({ JobID => $easy->batch_id });
  my $found_running = 0;
  foreach my $s (@status) {
    next if $s->DeviceID == $easy->DeviceID; # we know our instance is not done
    $found_running = 1 if $s->Status =~ /^(Pending|Running)$/;
  }
  unless ($found_running) {
    # copy file to external server here
  }
}


Oh - you will have to wrap the body of that if statement in a loop, sleeping a few seconds between iterations, until the other instances are done.
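That polling loop can be sketched as a generic helper. For illustration, status rows here are plain hashrefs (an assumption for testability; in the real script you would call the DeviceID and Status accessors on the broker objects, as in the snippets above):

```perl
use strict;
use warnings;

# Poll until no *other* job instance is Pending or Running.
# $fetch_status->() returns a list of hashrefs like
#   { DeviceID => 12, Status => 'Running' }
# $my_id is this instance's DeviceID. Returns 1 when all other
# instances are done, or 0 if we give up after $max_polls attempts.
sub wait_for_others {
    my ($fetch_status, $my_id, $interval, $max_polls) = @_;
    for (1 .. $max_polls) {
        my $running = 0;
        for my $s ($fetch_status->()) {
            next if $s->{DeviceID} == $my_id;   # skip our own instance
            $running = 1 if $s->{Status} =~ /^(Pending|Running)$/;
        }
        return 1 unless $running;
        sleep $interval;
    }
    return 0;   # gave up; other instances still running
}
```

The $max_polls cap keeps a stuck job from making this instance spin forever.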

Another option - CCS ARCHIVE function.

BAndersen Employee


The CCS "archive" function will send the output to multiple files (one for each device) and you could cat them together afterward. You can find more details in the CCS Scripting Guide for Network Automation under Tech Docs on the Support site. Error output will be in the UI for the script execution.



CCS attributes Action-Commands and Trigger-Commands support the ARCHIVE keyword, used to save the output of various CLI commands into one or more files. In its simplest form:

Action-Commands: ARCHIVE: sh ver

The ARCHIVE keyword creates a file named device_id-1.log for each device. The files can be viewed in the Files tab of Network Automation's Job Viewer.

The keyword can be followed by an optional file name and should precede each command whose output is to be archived:

Action-Commands: ARCHIVE (config.txt): show running-config

stores the output of the show running-config command in the file config.txt.

If a job runs against two or more devices and the script specifies a static file name, every iteration writes to the same file, so each device's output overwrites the previous device's. At the end of the job, the Zip archive contains a single file holding only the output of the last device the script ran against. This is not particularly helpful.

Use variables to dynamically specify the name of the file:

Action-Commands: ARCHIVE ($ipaddress.txt): show interface summ

stores the output of the show interface summ command in a series of files, one for each of the devices against which the script runs after the script-filter is applied.

Output from multiple commands can be stored in the same file:

Action-Commands: ARCHIVE ($ipaddress.txt): show interface summ ARCHIVE ($ipaddress.txt): show running-config

Two successive ARCHIVE directives in the same attribute store the output of the commands show interface summ and show running-config in the series of files named <$ipaddress>.txt.

At the end of a job, files created using the ARCHIVE keyword are placed in a Zip file. You can view the files and download the Zip file in the Files tab of the Job Viewer. If you use a variable to define the file names generated by the script, each device in the job has its own separate output file in the Zip file.

