Well, it was bound to happen!
Three years after being asked to migrate DHCP from a Windows DHCP server to Cisco routers, and to automate that conversion, it's finally going back the other way.
This time the PowerShell script reads through an exported Cisco router configuration file and builds the scopes in the Windows DHCP role. The script needs to be run on the server that will become the DHCP server for those new scopes, and the user running it needs administrator privileges so the DHCP settings can be made.
The script follows these steps:
- Reads through the configuration file and, using regular expressions, finds all DHCP pools (scopes), static assignments and exclusions.
- Creates all the scopes, along with all options found under each pool in the router configuration file.
- Processes the exclusions into each scope.
- Processes all static assignments.
The script is still a little bit in the works at the time of this posting, but testing across multiple configurations has found it working well.
The Code Repository can be found on GitHub.
Some DHCP Options are handled as follows:
| Code | Cisco Config | Option Description |
|------|--------------|--------------------|
| 3 | default-router | Default Gateway |
| 6 | dns-server | Domain Nameservers |
| 15 | domain-name | Domain Name |
| 42 | option 42 ip | NTP Servers |
| 43 | option 43 hex | Vendor Specific Option, usually WAP Controller IP |
| 51 | lease | Lease time |
| 66 | next-server | TFTP Server |
| 66 | option 66 ip | TFTP Server |
| 67 | bootfile | Boot filename |
| 67 | option 67 ascii | Boot filename |
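Option 43 is the fiddly one: Cisco stores it as dotted hex (like the `option 43 hex f108.0afe.0064` in the example config below), while on the Windows side a binary option value is set as a byte array (for instance via `Set-DhcpServerv4OptionValue`). The conversion itself is simple; here is a sketch of it in Python for illustration (the actual script does this in PowerShell):

```python
# Convert a Cisco dotted-hex option value (e.g. "f108.0afe.0064") into
# raw bytes, the form a Windows DHCP binary option expects.
# Illustration only; the real script performs this step in PowerShell.

def cisco_hex_to_bytes(dotted: str) -> bytes:
    """Strip the dot separators and decode the remaining hex pairs."""
    return bytes.fromhex(dotted.replace(".", ""))

value = cisco_hex_to_bytes("f108.0afe.0064")
print([f"0x{b:02x}" for b in value])
# → ['0xf1', '0x08', '0x0a', '0xfe', '0x00', '0x64']
```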
Example Cisco Config
```
ip dhcp excluded-address 10.10.0.1 10.10.1.0
ip dhcp excluded-address 10.10.3.220 10.10.3.223
ip dhcp excluded-address 192.168.0.1 192.168.0.9
!
ip dhcp pool PoolNumber1
 network 10.10.0.0 255.255.248.0
 update dns both override
 dns-server 10.10.255.1 10.10.255.2
 domain-name domainname.local
 option 42 ip 10.10.249.11 10.10.248.11
 default-router 10.10.0.1
 lease 8
!
ip dhcp pool PoolNumber2
 network 192.168.0.0 255.255.255.0
 dns-server 192.168.0.10
 option 43 hex f108.0afe.0064
 default-router 192.168.0.1
!
ip dhcp pool Device1
 host 10.10.1.30 255.255.248.0
 client-identifier 01b7.37eb.1f1a.0a
 default-router 10.10.0.1
!
ip dhcp pool Device2
 host 192.168.0.44 255.255.255.0
 client-identifier 0132.c19f.b7f3.3b
```
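The regular-expression pass over a config like this can be sketched as follows. This is Python for illustration only (the actual script is PowerShell), run against a cut-down copy of the example config above:

```python
import re

# Sketch of the regex pass that pulls pools and exclusions out of an
# exported Cisco config. Python for illustration; the real script is
# PowerShell, but the patterns are the same idea.
config = """\
ip dhcp excluded-address 10.10.0.1 10.10.1.0
ip dhcp excluded-address 192.168.0.1 192.168.0.9
!
ip dhcp pool PoolNumber1
 network 10.10.0.0 255.255.248.0
 default-router 10.10.0.1
!
ip dhcp pool Device1
 host 10.10.1.30 255.255.248.0
 client-identifier 01b7.37eb.1f1a.0a
"""

# Exclusions: "ip dhcp excluded-address <start> [<end>]"
exclusions = re.findall(
    r"^ip dhcp excluded-address (\S+)(?: (\S+))?$", config, re.M)

# Pools: the pool name plus the indented lines that follow it. Dynamic
# scopes carry a "network" line; static assignments carry a "host" line.
pools = {}
for name, body in re.findall(
        r"^ip dhcp pool (\S+)\n((?:[ \t].*\n?)*)", config, re.M):
    pools[name] = dict(re.findall(r"^[ \t]+(\S+) (.+)$", body, re.M))

print(exclusions)
# → [('10.10.0.1', '10.10.1.0'), ('192.168.0.1', '192.168.0.9')]
print(pools["PoolNumber1"]["network"])
# → 10.10.0.0 255.255.248.0
```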
This script makes use of the IPv4Calc module.
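The subnet maths is what IPv4Calc is used for: turning a pool's `network <address> <mask>` line into the start/end range a Windows scope needs. Python's standard `ipaddress` module can show the same calculation for the first pool above:

```python
import ipaddress

# The pool "network 10.10.0.0 255.255.248.0" has to become a scope with a
# start and end address. Python's stdlib ipaddress module illustrates the
# same subnet maths the IPv4Calc PowerShell module provides.
net = ipaddress.ip_network("10.10.0.0/255.255.248.0")
hosts = list(net.hosts())

print(net.with_prefixlen)        # → 10.10.0.0/21
print(hosts[0], hosts[-1])       # → 10.10.0.1 10.10.7.254
print(net.broadcast_address)     # → 10.10.7.255
```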
Ansible and Netbox are not just for high-end data centre systems. They can also be used on networks built from small-to-medium business switches and routers, such as the Cisco SMB product range.
Initial Auditing and Onboarding
Initially I started with a brand new, empty Netbox, in which I manually created a base setup, adding in each:
- Site
- Patch
- Device model in use
- Prefix, to begin with just the management subnets
Then, creating a YAML host file listing to begin with, I ran an Ansible Playbook that went through that list of devices, pulling base device information:
- IP Address (Management)
- Hostname
- Model
- Serial number
- Firmware (this was not initially used)
This was then recorded into Netbox as new devices, as well as exported to CSV. Once I had the devices in Netbox I could do some base housekeeping and put them in the right sites, patches and rack locations.
Netbox, the Source of Truth
Once the devices were in and the housekeeping done, Netbox became the Source of Truth both for our engineers and technicians and for Ansible. I could remove the hosts YAML file and, by pointing Ansible at Netbox for its inventory, run playbooks against a site, other locations, or across the whole group.
Minimum config
One of the first tasks was to ensure I had all devices configured to a standard. I hoped that they had been, but over time, without continued audits and checks, things can drift a little. So using Ansible I was able to ensure some defaults were set, such as:
- Disabling of access methods, such as HTTP, Telnet and also HTTPS
- NTP time servers and synchronised time
- Name Servers
- Monitoring service setting (SNMP)
Backup config
Another task for Ansible was to take regular configuration backups of all devices: an Ansible Playbook run on a daily schedule (cron) pulls the current configuration and stores it on the local file system of the Ansible server. This is then replicated off site over secure protocols.
```yaml
- name: Gather Facts
  gather_facts: no
  hosts: device_roles_switch
  vars:
    output_path: "{{ lookup('env', 'HOME') }}/backups/"
  tasks:
    ## Create backup folder for today
    - name: Get date stamp for filename creation
      set_fact: date="{{ lookup('pipe', 'date +%Y%m%d') }}"
      run_once: true
    # Get Switch Config
    - name: Get Config
      community.ciscosmb.facts:
        gather_subset:
          - config
    - name: Save Config
      copy:
        content: "{{ ansible_net_config }}"
        dest: "{{ output_path }}{{ inventory_hostname }}-{{ date }}.txt"
```
Finding Trunks and Devices
Another task was to find all the trunks between switches and patches and document them correctly in Netbox: an Ansible Playbook uses the LLDP data from gathered facts to determine the links between devices, which can then be documented as Netbox cables. Once that was done I also used the Netbox Topology Views plugin to visualise the network.
With the trunks in place I could also use MAC address searches to determine which ports IP Phones, DAPs and WAPs were connected to, among other devices. Since a standard brand of each was used throughout, it was only a matter of searching for the manufacturer portion of the MAC address.
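The manufacturer portion is the first three octets of the MAC (the OUI). A minimal sketch of that match, in Python, with a made-up placeholder OUI rather than any real vendor assignment:

```python
# Sketch of matching the manufacturer (OUI) portion of MAC addresses pulled
# from switch MAC tables. The OUI below is a hypothetical placeholder, not
# a real vendor assignment.
PHONE_OUI = "00:1a:2b"  # hypothetical IP phone vendor OUI

def normalise_mac(mac: str) -> str:
    """Accept Cisco (aabb.ccdd.eeff) or colon formats; return aa:bb:cc:dd:ee:ff."""
    digits = "".join(c for c in mac.lower() if c in "0123456789abcdef")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

def is_phone(mac: str) -> bool:
    return normalise_mac(mac).startswith(PHONE_OUI)

print(is_phone("001a.2bcd.ef01"))     # → True
print(is_phone("AA:BB:CC:00:11:22"))  # → False
```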
What began as a task just to export all Sites from a DattoRMM instance to a CSV file has started me down the path of building a module to deal with many of the DattoRMM API endpoints.
Mainly working around the REST APIs I needed to use to perform certain tasks, I've begun refactoring the export code into more of an API interface module, which may grow to be more useful. Other tasks may take my time away from this, but I will see where it may go.
The original code to export sites is here: DattoRMM-Site-Export. Currently this code pulls all Sites from a DattoRMM environment and exports the basic details to a CSV file. It removes the system sites called Managed, OnDemand and Deleted Devices, so that you only get an export of the customer base.
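That system-site filter is simple to sketch. A minimal Python illustration, with site records reduced to name-only dicts (the real API objects carry more fields):

```python
# Sketch of the site filter: drop the DattoRMM system sites so only the
# customer base is exported. Records are reduced to name-only dicts here;
# the real API responses carry more fields.
SYSTEM_SITES = {"Managed", "OnDemand", "Deleted Devices"}

def customer_sites(sites):
    return [s for s in sites if s["name"] not in SYSTEM_SITES]

sites = [{"name": "Managed"}, {"name": "Acme Ltd"},
         {"name": "Deleted Devices"}, {"name": "OnDemand"}]
print(customer_sites(sites))  # → [{'name': 'Acme Ltd'}]
```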
Also in the repo is code to set Site variables on DattoRMM sites, read in from a CSV, as this was part of the next steps I needed to take.
- Gets the API URL, Key and Secret from .env or environment variables (example below).
- Functions to interact with the DattoRMM API are in the dattormmapi.py Python file.
- The main function to do the API requests and export to CSV is in the export_sites.py Python file.
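A sketch of the environment-variable loading. The variable names here are hypothetical, for illustration; the repo's .env may use different keys, and a tool like python-dotenv would load the .env file into the environment before this runs:

```python
import os

# Hypothetical variable names, for illustration only; the actual names in
# the repo's .env may differ.
def load_settings():
    return {
        "url": os.environ["DATTORMM_API_URL"],
        "key": os.environ["DATTORMM_API_KEY"],
        "secret": os.environ["DATTORMM_API_SECRET"],
    }

# Stand-in values, as a .env file would provide:
os.environ.setdefault("DATTORMM_API_URL", "https://example.invalid")
os.environ.setdefault("DATTORMM_API_KEY", "my-key")
os.environ.setdefault("DATTORMM_API_SECRET", "my-secret")
print(load_settings()["url"])
```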
Refactoring to make this a more versatile module to handle interactions with the DattoRMM API will go into a new GitHub repo, which I'll make public once it's formed up some more.