
Grid test env

STolson
Techie
Posts: 4

Wondering what most people do for Grid test environments?

 

1) Do you have a permanent environment or build up something temporary as and when needed?
2) Do you have dedicated physical appliances for test, or do you use virtual ones instead?
3) How do you go about keeping the Grid settings consistent with production? Do you automate this or rely on change control?

 

Would be great to hear what others are up to.

Thanks,

Steve

Re: Grid test env

Expert
Posts: 81

Hi STolson

I'll answer based on my ~3 years of experience working with Infoblox solutions.

1) Do you have a permanent environment or build up something temporary as and when needed?
Unless you have to perform stress tests on a unit, I think there's no reason to have a permanent test environment. It's quite easy to deploy a brand-new grid on VMware vSphere or Workstation (most Infoblox VMware appliances work well with 2 GB of RAM and thin-provisioned disks). To test a feature, or to build a customer test environment based on pre-existing information (BIND or Microsoft DNS, DHCP, or IPAM data), I frequently use virtual machines in VMware Workstation. Please note that Infoblox does not support environments running in VMware Workstation.

2) Do you have dedicated physical appliances for test, or do you use virtual ones instead?
For lab purposes, I'd rather use virtual machines than physical appliances. There's no extra work to upgrade NIOS when a new release comes out, and I don't even need physical network connectivity (I use Host-Only networks or NAT adapters in Workstation). In addition, it's easy to destroy and rebuild: no serial cables to reconfigure network/grid settings after a "reset all licenses".

 

3) How do you go about keeping the Grid settings consistent with production? Do you automate this or rely on change control?
I do not usually need to maintain a test environment after building up a production site. But when I do, I simply deploy a new appliance running a compatible NIOS version and restore a backup file from the production grid. That has always worked fine for me.

 

Hope it helps.

Paulo Costa

Re: Grid test env

STolson
Techie
Posts: 4

Thanks for the reply, Paulo. That's really helpful information.

Re: Grid test env

jauling
Techie
Posts: 2

Sorry, I realize I'm responding to a pretty old thread.

 

I'm in the same boat as you guys. I've set up a test grid with only two HA pairs (compared to my production grid, which has over 20 HA pairs).

 

I was a bit apprehensive about restoring my production grid backup onto my test grid, but I gave it a shot this morning. After the successful restore, I logged into the test grid and was a bit shocked that it looked exactly like my production grid. This included all 20 HA members, which scared the crap out of me, so I shut it down and reverted to a snapshot (yay for vNIOS). That said, the test grid did show that all these production members weren't working properly, which kind of makes sense to me.

 

Should I have been worried? I want to play it safe, but maybe all I need to do is remove those members and re-add the one HA test member, and I'd be OK?

 

Thanks!

Re: Grid test env

Adviser
Posts: 213

Yes, you effectively created a complete replica of your production environment, and that's exactly the way it should work. That said, you do want to ensure when restoring the data that the NEW GM (the one in the lab that you restored to) cannot talk to your production environment (just in case).

Once the data is restored, you would go in and modify the IP addresses of all of the members in the restored Grid to something you can't route to (like 1.1.1.x). Then you can mimic any of them just by changing the IPs to ones in the lab and re-joining your lab gear into the Grid as that appliance.

 

 

EDIT: I received additional info that the restore will create a new Grid ID, so nothing would happen with the other members. The only potential issue is if you had enabled SNMP or email warnings (or scrape syslog entries that are forwarded) and saw a bunch of "false" messages about members being offline. Otherwise you're safe. I'd still recommend changing the IPs so it is easier to identify what should be offline when you are not replicating those members in the lab.

Re: Grid test env

Expert
Posts: 81

Hi jauling,

 

If you're using, for instance, VMware Workstation and all your network adapters are set to "Host-Only", there's nothing to worry about, as access to your lab environment will be restricted to your machine only.

 

 

In fact, you are setting up a home lab with fewer appliances than you have in production, so it is normal to see some "Offline" members. In this case, when I need to test, for instance, the DNS service on an appliance inside an HA pair, I just set up a new virtual machine, join it to the grid using the same IP address as that HA member, and start testing whatever I need.

 

But again, make sure that your lab is isolated from the production grid.

 

Regards, 

Paulo

 

 

 

 

 

 

 

Re: Grid test env

jauling
Techie
Posts: 2

Thanks everyone.

 

I just wanted to reply and say everything is fine. My test grid is actually deployed on the same network as one of the production grid members. I realize this is suboptimal and scary to most of you, but it's because our production grid also manages the DHCP member that sits in our test lab. Because of that, my test grid has the DHCP service disabled.

 

After I restored the production backup to my test grid, I had to go through every DHCP network sequentially and unassign all the members. This took quite a long time since I did it via the UI, but I suppose it could be done more efficiently via WAPI, something we may consider next time. I also checked in with Infoblox support, and they gave me the green light on my process too. Other than the pain of removing DHCP members, the test grid is working out great. Thanks everyone for the quick responses, I really appreciate it!
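
In case it helps anyone later, here is a minimal sketch of what that cleanup could look like via WAPI. The grid master address, WAPI version, and credentials are placeholders, and for a large grid you would want to add paging (e.g. _max_results) on the GET:

    import requests

    # Placeholders: lab grid master, WAPI version, and lab credentials.
    GM = "https://gm.lab.example.com/wapi/v2.7"

    session = requests.Session()
    session.auth = ("admin", "infoblox")
    session.verify = False  # lab only: vNIOS ships with a self-signed cert

    # Fetch all IPv4 networks together with their assigned DHCP members.
    networks = session.get(GM + "/network", params={"_return_fields+": "members"}).json()

    for net in networks:
        if net.get("members"):
            # Clear the DHCP member assignment on this network.
            session.put(GM + "/" + net["_ref"], json={"members": []})
            print("Unassigned members from", net.get("network", net["_ref"]))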

Re: Grid test env

TTiscareno
Community Manager
Posts: 244

For mass changes like unassigning Grid members, you can leverage the CSV Import feature. First, run a CSV export of all of your networks and ranges (open CSV Job Manager -> CSV Export under the Data Management tab, leave only the objects that you care about selected, and then export the data).

 

Once you have a CSV file with all of your networks and ranges, you can clear out the grid members in that column (being sure not to touch the header row). You can also remove any columns (including in the header row) that are not required or that you do not wish to update, to keep the file size down and help speed up the import process. After saving the file, proceed with importing it, setting the type of import to "Override".
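
To illustrate, a trimmed export might end up looking something like the lines below before re-import. The exact column names come from your own export and can vary by NIOS version, and the addresses here are made up; with an "Override" import, the emptied member column should clear the assignments:

    header-network,address*,netmask*,dhcp_members
    network,10.10.0.0,255.255.0.0,
    network,10.20.0.0,255.255.0.0,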

 

This should help you automate this type of change, and it is a bit easier than setting up a script to do the same.

Re: Grid test env

Expert
Posts: 173

Following the theme of replying to this old thread...
The only other issue we have had restoring full backups of production into test is when we waited too long to complete the removal of the extra grid members. We allowed our test grid to go a weekend in a partially "fixed" state, and some scheduled jobs ran: backups, discoveries, etc. This got the test grid into a state where we could no longer remove the "missing" members or edit some grid-wide configurations. The grid thought a job was in progress, so it would not let us delete the member until the job completed, but the member was never coming back.

In the test environment we were able to recover by taking a backup of test in its broken state, editing the XML of the backup to manually remove the member and the jobs that were in "limbo", and then importing that backup back into test. I would never do that in production, but it allowed us to save the hours of work we had already put into prepping the test environment.

We now rarely use the full grid backup to move data into test; instead, the new CSV import and export functionality is usually all that is needed to get the info into an isolated test area. It keeps you from having to go through all the work of deleting all the production members and their related spider web of configurations.

Re: Grid test env

Adviser
Posts: 60

FWIW, I did something similar, except that I used the exact same IP addressing scheme in the lab environment as in production. I set up the lab environment using VMware's NSX networking, which allowed me to use the same networks/IPs as production. You just have to make sure the lab is on its own NSX tenant router and that it isn't advertised via any routing protocol outside the lab environment. For me, the key to making a useful lab environment is to make it fast and easy to rebuild after the temp license runs out.

 

What I did was:

Set up the lab tenant and networks in NSX

Install the vNIOS appliances in VMware

Assign the appropriate network interfaces to each vNIOS appliance

Boot and configure the appropriate interface IPs via the NIOS console

Take a snapshot of the vNIOS appliances *BEFORE* generating the temp license

Generate a temp license on each appliance

Restore the production backup to the lab environment

Join the grid members to the grid

 

The reason I take a snapshot before generating the temp license is that when the temp license runs out, I can revert to the snapshot, and it takes two minutes to generate new licenses. Then I join the vNIOS appliance back to the grid. I haven't found a way to generate a new temp license once the old one has expired, so the snapshot method makes it easy to get a working vNIOS appliance up without rebuilding from scratch. Because the snapshot was taken AFTER the IPs were configured, it is super easy to rejoin the grid. Just run "set membership" and you're done.
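
For reference, the rejoin from the NIOS console is a single command that walks you through a few prompts, roughly like the following (prompts paraphrased from memory; the VIP, grid name, and secret are placeholders for your lab's values):

    Infoblox > set membership
      Grid Master VIP: 10.1.1.10
      Grid Name: Infoblox
      Grid Shared Secret: ********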

 

The lab networks are the exact same networks as production, so you cannot advertise them outside the lab tenant in NSX. To access the lab environment, I created one routable lab network inside the lab tenant and put one Windows and one Linux server on it. I use RDP to connect to that single Windows workstation, and from there I can access the lab Infoblox environment via the web GUI. In this lab I cannot test everything, but I only need to test upgrades and config changes. Another important item to note about the lab Linux server: I set up an NTP, syslog, and email server on it so the lab environment has something to point to.

 

After restoring a grid backup to the lab environment, there is very little I have to do to have a functioning environment.  Here is an example of some steps I have to complete:

 

1) Create a bright red banner for the test grid that states "This is the LAB environment!" It can get confusing when the lab environment has the same names and IPs as production.

 

2) Change the NTP, syslog, and email grid settings to point to the Linux server.

 

3) Change the automated grid backup to point to the Linux server.

 

4) Change reporting to send reports to the Linux server.

 

Having the syslogs available on the external syslog server comes in handy. Having all the grid members sync to the "fake" NTP server is also helpful. I set up my NTP server (the Linux host) to sync from the ESXi host, since it has no route to the outside world, and then set up the NTP server to use the local clock as its source. Reporting also uses the Linux host to store reports.
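
For classic ntpd, that local-clock setup is only a few lines in /etc/ntp.conf (the ESXi host address below is a placeholder; chrony has an equivalent "local" directive):

    # Prefer the ESXi host's clock while it is reachable (placeholder address)
    server 192.168.100.1 iburst prefer
    # Undisciplined local clock as a last-resort source for the isolated lab
    server 127.127.1.0
    fudge 127.127.1.0 stratum 10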

 

If you decide to set up an Infoblox lab in VMware and you are configuring HA, please be aware that the VIP will not work (and you will waste a lot of time and pull your hair out) if you do not configure the port-group security properly. I googled and found this nugget (the following text is attributed to its author, Christian Elsen):

 

"The port-profile to which the vNIOS HA and LAN ports connect to, have to allow more than one MAC address per vNIC. This can be done by changing the security settings of the port-group to accept “MAC address changes” and “Forged transmits”

 

Sorry for the long post.  Hope this helps someone.

Re: Grid test env

txborah
Techie
Posts: 1

We use VLAN tagging on the physical appliances in our production environment. In my experience, that prevents us from restoring our backups to an all-virtual test grid. Am I missing something? Can someone suggest an approach that will let me build a test grid without purchasing at least one physical appliance to function as Grid Master?

 

Thanks

Re: Grid test env

Expert
Posts: 81
Hello txborah,

VLAN tagging shouldn't be a problem. What exactly did you try to do?

Could you please describe your experience in more detail?

Regards,
Paulo Costa

Re: Grid test env

Dhiraj
Techie
Posts: 1

Hi guys,

 

I want to set up a new Infoblox test environment for testing purposes.

 

1. Is it possible to do it on VMware Workstation?

 

2. Once Infoblox is ready on VMware, how will I test whether everything is working properly or not, like PCs getting IPs from DHCP, the DNS service, etc.? What are the criteria to test it?

 

3. Is it possible to integrate Infoblox with GNS3?

Re: Grid test env

Expert
Posts: 81

Hi Dhiraj,

 

1. It is totally possible to run labs on VMware Workstation. As I mentioned earlier, I've been using VMware Workstation for all my time working with Infoblox products.

 

2. There are no specific criteria for tests running on Infoblox virtual machines on VMware Workstation. For lab purposes, you can treat Infoblox VMs running on Workstation like any other virtual machine, but there are some points to consider:

a. Do not use "Bridged" connections if you are working with DHCP inside your company. That would let all users on your network/VLAN see your virtual machine as an "actual" DHCP server and will cause you problems. Put a Windows/Linux client and an Infoblox member on a Host-Only network and you can test DNS/DHCP as if on a real network without impacting your site (see the sketch after this list).

b. VMware Workstation is not supported for production workloads.
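
As a concrete example of such a test, a small dnspython script can query the lab member directly. The member IP and record name below are placeholders for whatever you configure on your Host-Only network (for DHCP, simply renewing the lease on the Host-Only client is usually test enough):

    import dns.resolver

    # Placeholder: the lab Infoblox member serving DNS on the Host-Only network.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.117.10"]

    # Query a record created on the lab grid to confirm the member answers.
    answer = resolver.resolve("host1.lab.example.com", "A")
    for record in answer:
        print(record.address)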

 

3. I'm not the networking guy, but as long as you can put Infoblox and the GNS3 devices on the same network, or make them reachable from one another via routing, you'll be fine.

 

Hope this helps!

Paulo Costa 
