
Reading an exported Infoblox CSV file in Python


I haven't had the opportunity to post in this forum for a while, so maybe this has already been covered by someone else. However, just in case it hasn't been:

 

I recently had occasion to write a Python script that would read in a CSV file produced by the Infoblox global CSV export function (e.g., as invoked from Data Management > IPAM > CSV Job Manager > CSV Export > +). The problem with working with such files in Python (or other languages) is that they can mix different types of data in the same file. For example, an export might produce a CSV file containing data rows for both networks and network containers, where the two row types have different fields in any given column position. (See the example below.)

 

So I wanted to find a way that a Python script could read in an arbitrary Infoblox-exported CSV file. (In other words, do the same thing that the Infoblox CSV import function does.)

 

For example, suppose that you have a file global-csv-export.csv to which you've exported all records for IPv4 networks and network containers, and you want to read the file and print out the network or network container address along with any comment included for it. Note that in the file below the network address ("address*") is in the second field for both networkcontainer and network data rows, while the comment is the eighth field for networkcontainer rows but the ninth field for network rows (the quick check after the sample data confirms this). So you can't just use the field positions unless you want to do a lot of manual hard-coding.

 

header-networkcontainer,address*,netmask*,always_update_dns,boot_file,boot_server,broadcast_address,comment,ddns_domainname,ddns_ttl,deny_bootp,discovery_exclusion_range,discovery_member,domain_name,domain_name_servers,enable_ddns,enable_discovery,enable_option81,enable_pxe_lease_time,enable_threshold_email_warnings,enable_threshold_snmp_warnings,enable_thresholds,generate_hostname,ignore_client_requested_options,is_authoritative,lease_scavenge_time,lease_time,mgm_private,network_view,next_server,option_logic_filters,pxe_lease_time,range_high_water_mark,range_high_water_mark_reset,range_low_water_mark,range_low_water_mark_reset,recycle_leases,routers,threshold_email_addresses,update_dns_on_lease_renewal,update_static_leases,zone_associations,EA-Site
networkcontainer,192.168.0.0,16,,,,,,,,,,,,,,,,,False,False,,,,,,,False,default,,,,95,85,0,10,,,,,,,
networkcontainer,10.0.0.0,8,,,,,Demo,,,,,,,,,,,,False,False,,,,,,,False,default,,,,95,85,0,10,,,,,,,Maryland
networkcontainer,10.0.0.0,16,,,,,Demo 2,,,,,,,,,,,,False,False,,,,,,,False,default,,,,95,85,0,10,,,,,,,Virginia
header-network,address*,netmask*,always_update_dns,basic_polling_settings,boot_file,boot_server,broadcast_address,comment,ddns_domainname,ddns_ttl,deny_bootp,dhcp_members,disabled,discovery_exclusion_range,discovery_member,domain_name,domain_name_servers,enable_ddns,enable_discovery,enable_option81,enable_pxe_lease_time,enable_threshold_email_warnings,enable_threshold_snmp_warnings,enable_thresholds,generate_hostname,ignore_client_requested_options,is_authoritative,lease_scavenge_time,lease_time,mgm_private,network_view,next_server,option_logic_filters,pxe_lease_time,range_high_water_mark,range_high_water_mark_reset,range_low_water_mark,range_low_water_mark_reset,recycle_leases,routers,threshold_email_addresses,update_dns_on_lease_renewal,update_static_leases,vlans,zone_associations,EA-CMP Type,EA-Cloud API Owned,EA-Name,EA-Subnet ID,EA-Subnet Name,EA-Tenant ID
network,192.168.0.0,255.255.254.0,,,,,,Home network (current),,,,"m1.example.com,m2.example.com",False,,,,,,,,,False,False,,,,,,,False,default,,,,95,85,0,10,,192.168.1.1,,,,,,,,,,,
network,192.168.101.0,255.255.255.0,,,,,,Main LAN (future),,,,m1.example.com,False,,,,,,,,,False,False,,,,,,,False,default,,,,95,85,0,10,,,,,,,,,,,,,
network,192.168.102.0,255.255.255.0,,,,,,Wireless devices (future),,,,m1.example.com,False,,,,,,,,,False,False,,,,,,,False,default,,,,95,85,0,10,,,,,,,,,,,,,
network,192.168.103.0,255.255.255.0,,,,,,Grid communications (future),,,,m1.example.com,False,,,,,,,,,False,False,,,,,,,False,default,,,,95,85,0,10,,,,,,,,,,,,,
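
As a quick sanity check (a minimal sketch, assuming the sample above is saved as global-csv-export.csv), you can print the column position of "comment" for each header row to see that it really does differ between the two object types:

import csv

with open('global-csv-export.csv', 'r') as f:
    for row in csv.reader(f):
        if row[0].startswith('header-'):
            # 1-based position counting the object-type field, matching
            # the "eighth field"/"ninth field" description above.
            print(row[0], '-> comment is field', row.index('comment') + 1)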

 

Here's an example program that will do this in a general way, without the need for any hard-coding. The ib_csv_reader() function does the work of figuring out which rows in the file correspond to header definitions, and then using those definitions to read in each of the data rows into a Python dictionary (one per row).

 

The main part of the program then uses the ib_csv_reader() function to create an object ib_reader that can be used to iterate over the CSV file one row at a time. Since each row is returned as a Python dictionary, the program can simply reference the values in the row using the field names, without having to worry about which field is in which column.

 

import csv
import sys


def ib_csv_reader(csv_reader):
    """Read Infoblox format CSV file, yield dict of values in row."""

    # csv_reader must be an iterator returned by csv.reader. It
    # takes care of extracting field values between delimiters and
    # dealing with quoted field values, so we don't have to.
    for row in csv_reader:
        # Each row is a list of field values, with the first field
        # being something like "header-network" or "network".
        if row[0].startswith('header-'):
            # This is a header row with field names for data rows.
            # The first field has the object type (e.g., "network").
            object_type = row[0].split('-')[1]
            # The remaining fields are the field names for this type.
            # Save them as a list in a dict keyed by object type.
            # NOTE: The fields dict is defined as an attribute of this
            # function in order to save its value between calls.
            try:
                # Add a new entry to the fields dict if it exists.
                ib_csv_reader.fields[object_type] = ['object_type'] + row[1:]
            except AttributeError:
                # No, it doesn't, so initialize the fields dict.
                ib_csv_reader.fields = {object_type: ['object_type'] + row[1:]}
            # Go on to the next row and see what it is.
        else:
            # This is a data row, with the first field being the
            # object type. Construct a dict mapping the field names
            # for this type to the data values. Yield that value to
            # the caller.
            if row[0] not in ib_csv_reader.fields:
                sys.exit('No header-{} line in CSV file'.format(row[0]))
            yield dict(zip(ib_csv_reader.fields[row[0]], row))


if __name__ == "__main__":
    with open('global-csv-export.csv', 'r') as csv_file:
        # csv_file can be used to iterate over the lines of any file.
        csv_reader = csv.reader(csv_file)
        # csv_reader can be used to iterate over the rows of a CSV file.

        ib_reader = ib_csv_reader(csv_reader)
        # ib_reader can be used to iterate over the rows of an Infoblox CSV file.

        for row in ib_reader:
            # We look at the 'object_type' field to see whether this row
            # contains values for a networkcontainer or a network.
            if row['object_type'] == 'networkcontainer':
                print('container: {}, {}'.format(row['address*'], row['comment']))
            elif row['object_type'] == 'network':
                print('network: {}, {}'.format(row['address*'], row['comment']))

 

Running the script above using Python 3 will produce the following output:

 

container: 192.168.0.0, 
container: 10.0.0.0, Demo
container: 10.0.0.0, Demo 2
network: 192.168.0.0, Home network (current)
network: 192.168.101.0, Main LAN (future)
network: 192.168.102.0, Wireless devices (future)
network: 192.168.103.0, Grid communications (future)
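
Since each row comes back as a plain Python dictionary, it's also easy to build on this. As one further example (a sketch of my own, not part of the script above), you could group the addresses by their EA-Site extensible attribute, using dict.get() so that rows whose type has no EA-Site column (the network rows in this sample) don't raise a KeyError:

from collections import defaultdict

# Assumes the same imports and the ib_csv_reader() function defined above.
by_site = defaultdict(list)
with open('global-csv-export.csv', 'r') as csv_file:
    for row in ib_csv_reader(csv.reader(csv_file)):
        # Rows without an EA-Site column, or with an empty value,
        # are grouped under the empty string.
        by_site[row.get('EA-Site', '')].append(row['address*'])

for site, addresses in sorted(by_site.items()):
    print('{}: {}'.format(site or '(no site)', ', '.join(addresses)))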

 

I tried to comment the program so that it would be intelligible to someone with at least some Python knowledge, but if you have any further questions about it please feel free to ask.
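
One design note: the script saves the header definitions as an attribute of the ib_csv_reader() function so they persist between calls. If you'd rather avoid function attributes, a local dict inside the generator works just as well; here is a minimal sketch of that variant (my naming, not from the script above, and assuming the same csv and sys imports):

def ib_csv_reader_local(csv_reader):
    """Like ib_csv_reader(), but keeps header state local to each call."""
    fields = {}
    for row in csv_reader:
        if row[0].startswith('header-'):
            # Header row: remember the field names for this object type.
            fields[row[0].split('-', 1)[1]] = ['object_type'] + row[1:]
        elif row[0] not in fields:
            sys.exit('No header-{} line in CSV file'.format(row[0]))
        else:
            # Data row: map the saved field names to the values.
            yield dict(zip(fields[row[0]], row))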

Re: Reading an exported Infoblox CSV file in Python


This is great, thanks for sharing!!

 

Question: is there an API endpoint for CSV Job Manager? I cannot seem to find one in the docs or the Swagger front end ...

If not, will there be one in the future?

 

Thanks
