Ansible and Netbox are not just for high-end data centre systems. They can also be used on networks built from small to medium business switches and routers, such as the Cisco SMB product range.
Initial Auditing and Onboarding
Initially I started with a brand new, empty Netbox, in which I manually created a base setup, adding in each:
- Site
- Patch
- Device model in use
- Prefixes, to begin with just the management subnets
Then, creating a YAML hosts file listing to begin with, I ran an Ansible playbook that went through that list of devices pulling base device information:
- IP Address (Management)
- Hostname
- Model
- Serial number
- Firmware (this was not initially used)
This was then recorded into Netbox as new devices, as well as exported to CSV. Once I had the devices in Netbox I could do some basic housekeeping and put them in the right sites, patches and rack locations.
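The onboarding playbook itself was straightforward. The sketch below is a hypothetical reconstruction rather than the exact playbook: it assumes the community.ciscosmb and netbox.netbox collections, a Netbox instance at netbox.example.local with an API token in netbox_token, and a pre-created site and role (head-office, switch); adjust names and parameters to your own environment and collection versions.

- name: Onboard SMB devices into Netbox
  hosts: all
  gather_facts: no
  tasks:
    # Pull hostname, model, serial and firmware version from the switch
    - name: Gather base device facts
      community.ciscosmb.facts:

    # Record the device in Netbox (the device type and site must already exist)
    - name: Add device to Netbox
      netbox.netbox.netbox_device:
        netbox_url: "https://netbox.example.local"    # assumed Netbox URL
        netbox_token: "{{ netbox_token }}"            # assumed token variable
        data:
          name: "{{ ansible_net_hostname }}"
          device_type: "{{ ansible_net_model }}"
          device_role: switch                          # placeholder role
          site: head-office                            # placeholder site
          serial: "{{ ansible_net_serial }}"
        state: present
      delegate_to: localhost

    # Keep a CSV copy of the audit alongside the Netbox records
    - name: Append device to audit CSV
      ansible.builtin.lineinfile:
        path: "{{ lookup('env', 'HOME') }}/device-audit.csv"
        line: "{{ ansible_host }},{{ ansible_net_hostname }},{{ ansible_net_model }},{{ ansible_net_serial }},{{ ansible_net_version }}"
        create: yes
      delegate_to: localhost

Writing each device out as a CSV line as well gives a quick sanity check of the gathered facts before trusting the Netbox records.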
Netbox, the Source of Truth
Once the devices were in and the housekeeping done, Netbox became the Source of Truth for our engineers and technicians, and also for Ansible. I could remove the hosts YAML file and, by pointing Ansible at Netbox for its inventory, run playbooks against a single site or other location, or across the whole group.
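The switch from a static hosts file to Netbox is done with the netbox.netbox.nb_inventory dynamic inventory plugin. A minimal inventory source might look like the sketch below; the URL and token are placeholders, and the plugin expects the source file name to end in netbox.yml or netbox.yaml.

# netbox.yml -- hypothetical dynamic inventory source for nb_inventory
plugin: netbox.netbox.nb_inventory
api_endpoint: https://netbox.example.local
token: 0123456789abcdef0123456789abcdef   # placeholder; use vault or an environment variable in practice
validate_certs: true
# Grouping by device role and site is what lets playbooks target groups
# such as device_roles_switch, used in the playbooks below
group_by:
  - device_roles
  - sites

Playbooks are then run with something like ansible-playbook -i netbox.yml backup.yml, and targeting a single site is just a matter of limiting the run to that site's group.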
Minimum config
One of the first tasks was to ensure all devices were configured to a standard. I hoped that they had been, but over time, without continued audits and checks, things can drift a little. Using Ansible I was able to ensure some defaults were set, such as the following (a verification sketch is shown after the list):
- Disabling access methods such as HTTP, Telnet and also HTTPS
- NTP time servers and synchronised time
- Name Servers
- Monitoring service settings (SNMP)
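As a sketch of the verification side, the hypothetical playbook below pulls the running configuration and asserts that the baseline lines are present. The config strings are placeholders only; the exact command syntax varies between SMB models, and the remediation commands themselves are not shown here.

- name: Audit baseline configuration
  hosts: device_roles_switch
  gather_facts: no
  tasks:
    # Pull the running configuration as a single string
    - name: Get running configuration
      community.ciscosmb.facts:
        gather_subset:
          - config

    # Check the expected baseline lines are in place (placeholder strings)
    - name: Verify baseline settings
      ansible.builtin.assert:
        that:
          - "'no ip telnet server' in ansible_net_config"   # placeholder command syntax
          - "'sntp server' in ansible_net_config"           # placeholder command syntax
          - "'ip name-server' in ansible_net_config"        # placeholder command syntax
        fail_msg: "{{ inventory_hostname }} is missing part of the baseline configuration"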
Backup config
Another task for Ansible was taking regular configuration backups of all devices. An Ansible playbook runs on a daily cron schedule, pulls the current configuration and stores it on the local file system of the Ansible server, which is then replicated off site over secure protocols.
- name: Backup switch configs
  hosts: device_roles_switch
  gather_facts: no
  vars:
    output_path: "{{ lookup('env', 'HOME') }}/backups/"
  tasks:
    # Get a date stamp to use in the backup filenames
    - name: Get date stamp for filename creation
      set_fact:
        date: "{{ lookup('pipe', 'date +%Y%m%d') }}"
      run_once: true

    # Pull the running configuration from the switch
    - name: Get Config
      community.ciscosmb.facts:
        gather_subset:
          - config

    # Write the configuration out to the backup folder
    - name: Save Config
      copy:
        content: "{{ ansible_net_config }}"
        dest: "{{ output_path }}{{ inventory_hostname }}-{{ date }}.txt"
Finding Trunks and Devices
Another task was to find all the trunks between switches and patches and document them correctly in Netbox. An Ansible playbook used the LLDP information from gathered facts to determine the links between devices, which could then be documented as Netbox cables. Once that was done I also used the Netbox Topology Views plugin to visualise the network.
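A hypothetical sketch of that idea is below. It assumes the facts module exposes LLDP neighbours as ansible_net_neighbors (a dict keyed by local interface, with a host and port for each neighbour) and that both ends already exist as devices and interfaces in Netbox; check the fact structure and the netbox.netbox.netbox_cable parameters against your collection versions.

- name: Document trunks as Netbox cables
  hosts: device_roles_switch
  gather_facts: no
  tasks:
    # Gather interface facts, which include the LLDP neighbour table
    - name: Gather interface and neighbour facts
      community.ciscosmb.facts:
        gather_subset:
          - interfaces

    # Create a cable between each local port and the first neighbour seen on it
    - name: Create Netbox cable for each neighbour
      netbox.netbox.netbox_cable:
        netbox_url: "https://netbox.example.local"   # assumed Netbox URL
        netbox_token: "{{ netbox_token }}"           # assumed token variable
        data:
          termination_a_type: dcim.interface
          termination_a:
            device: "{{ inventory_hostname }}"
            name: "{{ item.key }}"
          termination_b_type: dcim.interface
          termination_b:
            device: "{{ item.value[0].host }}"
            name: "{{ item.value[0].port }}"
        state: present
      loop: "{{ ansible_net_neighbors | default({}) | dict2items }}"
      delegate_to: localhost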
I could also use MAC address searches to determine which ports IP phones, DAPs and WAPs, among other devices, were connected to. Since a standard brand of those was used throughout, it was only a matter of searching for the manufacturer portion of the MAC address.
This is the second part (see part 1 here) of migrating a static web site over to the Pelican static site generator, following my initial posting, Pelican Static Site Generator.
Google Analytics
Transferring the Google Analytics site code over to the Pelican-created site was as simple as adding it to pelicanconf.py as a variable, and that was it.
Amended 2023-08-07
With Google's GA4 a different approach was taken. I added a new variable to hold the GA4 code:
pelicanconf.py
GOOGLE_GA4_ID = "your_site_code"
Then in the base.html template I added the Google-supplied HTML code in the head section, with the Jinja2 variable pulled from pelicanconf.py:
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ GOOGLE_GA4_ID }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ GOOGLE_GA4_ID }}');
</script>
<!-- Google tag (gtag.js) -->
</head>
Hosting and Replacing on Netlify
As the GatsbyJS site was hosted on Netlify, which also supports Python and Pelican builds, rehosting on Netlify was relatively easy.
First, though, was getting the source repository ready, starting with making sure the requirements.txt file exists, as the Netlify build process will use it to install the required Python libraries for the site. I generally keep this up to date as I work, but simply creating it again with the usual command is easy enough:
pip freeze > requirements.txt
Then making sure the publishconf.py file has all the correct settings for the final published site, overriding the development variables stored in pelicanconf.py. Mine was pretty much correct and only needed confirming that it had the right variables for the final site.
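For reference, a typical publishconf.py follows the shape below (the values shown are placeholders rather than this site's actual settings): it imports everything from pelicanconf.py and then overrides the handful of settings that differ in production.

# publishconf.py -- publish-time overrides; the values here are placeholders
import os
import sys

sys.path.append(os.curdir)
from pelicanconf import *  # start from the development settings

SITEURL = "https://www.example.com"   # placeholder final site URL
RELATIVE_URLS = False

FEED_ALL_ATOM = "feeds/all.atom.xml"
CATEGORY_FEED_ATOM = "feeds/{slug}.atom.xml"

DELETE_OUTPUT_DIRECTORY = True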
A netlify.toml file in the root of the repo gives Netlify the command to build the site and the location of the built site files to publish:
[build]
command = "pelican content -s publishconf.py"
publish = "output"
There is also a block to specify the page (HTML file) to display for a 404 Page Not Found response:
[[redirects]]
from = "/*"
to = "/404.html"
status = 404
Linking in the new repo
After those things are all settled it is just a matter of linking the GitHub repo to the Netlify site, and Netlify will do the rest: building the site and publishing it ready for use.
So is that it?
Well, while the Pelican-built site is now the active builder for the static pages, I still have plenty to work on. Plenty to upgrade, including the theme, which is still not quite right, and of course content on the site for other projects and whatever else is happening in life...
I've been investigating Pelican, a static site generator, to replace my current GatsbyJS-generated site. I'm far more at home and familiar with Python and Jinja2, so that was part of my reasoning for taking a look into it.
Working on the basis of replacing my current GatsbyJS-created site and posts with one created by Pelican, the following covers my initial issues and solutions. I've only added brief notes on what was done, without going into detail, as the plugin sites cover correct use in detail.
Redirect old site paths
In investigating the move to a new static site generator I also decided to change the structure a little. Since not many articles existed at the time of this posting, only minimal redirects have to be created.
So how does Pelican handle this on a per article basis?
Enter the pelican-redirect plugin. Installing it is as simple as installing it from PyPI:
pip install pelican-redirect
Then, by adding an additional line to the metadata of each post:
original_url: blog/hacktoberfest-2019.html
Pelican will create an HTML file at the URL location specified that will redirect to the new post location that it will create.
Canonical
Adding the canonical header entry to articles using the SITEURL variable does not produce what you need if RELATIVE_URLS = True is set. To get around this and always use a full URL, you can copy the SITEURL variable into CANONICALURL and then use that variable in the base.html template.
pelicanconf.py
SITEURL = "https://jscooksey.github.io/Pelican"
CANONICALURL = SITEURL
base.html
<head>
<title>{{SITENAME}}</title>
{% if article %}
<link rel="canonical" href="{{ CANONICALURL }}/{{ article.url }}" />
{% endif %}
Sitemap
To produce sitemap files for SEO add the pelican-sitemap plugin:
pip install pelican-sitemap
and then add a SITEMAP variable to pelicanconf.py as described in the README of the repo. A rough example is shown below.
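As an example of the shape of that variable (the priorities and change frequencies here are illustrative values, not recommendations):

# pelicanconf.py -- example sitemap plugin settings (illustrative values)
SITEMAP = {
    "format": "xml",
    "priorities": {
        "articles": 0.7,
        "indexes": 0.5,
        "pages": 0.5,
    },
    "changefreqs": {
        "articles": "monthly",
        "indexes": "daily",
        "pages": "monthly",
    },
}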
On my current site I also had social media sharing links at the bottom of every article, allowing readers to share the article on their own social media streams.
The share-post plugin does this, and again is simply installed using pip:
pip install pelican-share-post
Then in the article.html template add links using the article.share_post attribute:
<a href="{{ article.share_post['twitter'] }}">...</a>
<a href="{{ article.share_post['facebook'] }}">...</a>
<a href="{{ article.share_post['linkedin'] }}">...</a>
Adding Atom (or RSS) feeds is as easy as changing a few options, as this is built into Pelican. In pelicanconf.py:
FEED_MAX_ITEMS = 20
FEED_ALL_ATOM = "feeds/all.atom.xml"
CATEGORY_FEED_ATOM = "feeds/{slug}.atom.xml"
TRANSLATION_FEED_ATOM = None
AUTHOR_FEED_ATOM = None
AUTHOR_FEED_RSS = None
Code Highlighting
Markdown code highlighting is ultimately processed through Pygments, which can be personalised but also has some built-in styles. Examples are on the Pygments site here, and CSS files for these can be copied from the repo richleland/pygments-css.
You can copy the CSS file of your choice to your theme's static folder (e.g. static/css/pygment.css) and then import it in the styles used by base.html:
@import url(pygment.css);
Markdown code blocks can then be used, specifying the type of code inside them so it is highlighted.
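For example, a fenced block in an article's Markdown tagged with the language name will be highlighted by Pygments:

```python
def greet(name):
    return f"Hello, {name}"
```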
Conclusion
This is as far as I've got so far working with Pelican, and this Pelican-created site is initially hosted on GitHub Pages at https://jscooksey.github.io/Pelican/
I'll post more as I figure things out further, with the intention that this will replace the primary site hosted at my domain.