Weeks 1 and 2

6.1.15
A Second Encounter with CentOS 7 (Oracle Linux 7)

Our first goal was to choose an operating system to configure Puppet on. After doing some research and realizing that OVM (Oracle VM) appears to offer more flexibility than KVM (Kernel-based Virtual Machine), I will initially work towards configuring Puppet on Oracle Linux 7 (Unbreakable Linux). While the OS may be incredibly secure, it also makes some simple tasks more challenging. For example, neither the GUI nor the CLI installer forces the end user to review the default settings, and if you forget to enable or change a setting prior to installing, it becomes that much harder to figure out what you need and how to change it with only a command line. Additionally, the network adapters appear to be disabled by default. After getting a proper Oracle Linux 7 ISO from Oracle, we used a “mkisofs” command to turn a flash drive into an Oracle Linux 7 installer. We were able to install Oracle Linux 7 using the Server with GUI option; however, the system typically boots into the CLI and does not give you a GUI until you run the window manager from a non-root account by typing 'startx'. I am opting for the GUI, and using the terminal when necessary.
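For reference, the way I would normally write an installer image to a flash drive is with dd. This is only a sketch: the ISO filename and the /dev/sdb device name are assumptions for this example, and dd will destroy whatever is on the target device.

    # Identify the flash drive first; writing to the wrong device is unrecoverable.
    lsblk
    # Write the installer image (filename and device are assumptions for this example).
    sudo dd if=OracleLinux-R7-Server-x86_64-dvd.iso of=/dev/sdb bs=4M
    sudo sync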

6.2.15
The Beginnings of My Puppeteer Position

Now that the OS is fully installed, we will begin looking into setting up Puppet. A few questions have come up as a result, but I believe a master/agent scheme would work much better than a standalone Puppet configuration, since it keeps each managed server from having to allocate additional resources to maintaining its own configuration. To compensate for the increased load on the master servers, we could implement some form of load balancing.

It will be important to remember to open port 8140 on both Puppet masters and agents (the documentation did not state whether it needs TCP, UDP, or both; a Wireshark dump could theoretically show us, and worst case both can be opened), and to make sure that whatever hostnames we decide on are easily resolvable. The default hostname is 'puppet', and perhaps we could use a numeric naming scheme (depending on how many servers end up being tied together with Puppet; otherwise we could use actual names or something less cryptic).
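As a rough sketch of where the hostname ends up mattering, this is the agent-side setting I expect we will need once a master name is chosen. The puppet.conf path assumes open source Puppet 3.x, and puppet01.manhattan.edu is just a hypothetical example of the numeric scheme.

    # Check that whatever name we pick actually resolves on this network.
    getent hosts puppet01.manhattan.edu

    # Point the agent at the master (append to the agent's config).
    sudo tee -a /etc/puppet/puppet.conf > /dev/null <<'EOF'
    [agent]
    server = puppet01.manhattan.edu
    EOF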

The Network Time Protocol (NTP) should be enabled to ensure that any requests or certificates sent to or received from other servers/agents can be responded to and will not appear out of date or expired, as clock skew will cause issues. By default, it appears as though time synchronization is already enabled in Oracle Linux 7; if it is not, it can be set up by running “sudo yum install ntp”, followed by “sudo systemctl start ntpd” and then “sudo systemctl enable ntpd”.
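Put together, with a quick check that the daemon is actually syncing (the verification command is my own addition, assuming the stock ntp package):

    sudo yum install -y ntp
    sudo systemctl start ntpd
    sudo systemctl enable ntpd
    # Verify: ntpq lists upstream servers; one is marked '*' once the clock is synced.
    ntpq -p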

Out of personal preference, I installed Chrome and added the following extensions to both Chrome and Firefox: Adblock Plus, uBlock, and Ghostery. This improves web browsing by preventing the browser from connecting to some advertising and tracking domains, which in turn improves page rendering time.

The first lab computer I am testing this on has the hostname “oracleLabTest.manhattan.edu”. It is not yet reachable by hostname, but I can ping it by IP. I am guessing this is due to the DHCP setup: our DNS/DHCP servers do not have a static reservation for this machine, so its hostname is unknown on the network.
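The quick checks I used from another machine on the network (the IP below is the lab box's address that shows up later in these notes):

    ping -c 3 149.61.33.168                  # responds by IP
    nslookup oracleLabTest.manhattan.edu     # fails until DNS/DHCP has a record for this machine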

I am unsure which paths and nodes would need to be plugged into the Puppet configuration in order for the hostname to be resolvable. Before that, we still need to determine how the server should be set up, how it will serve information, how many machines will be allocated to this project, how the load balancing between those machines should (ideally) work, and how the SSL certificates work between the master(s) and agents.

6.4.15 Untangling the Strings

I have finished the Manhattan College Server Checklist and will continue attempting to get Puppet configured on this test PC in the Linux Lab. I created a Certificate Authority (CA) certificate on the Puppet server, changed the default hostnames in the configuration files to match the machines, and will now need to set up some form of web server. On Puppet's site they mention that the built-in web server Puppet ships with is not truly meant for production environments, as it does not handle concurrent connections well (around 10+ will cause errors/issues). Following Puppet's install guide, I will use their recommendation of Passenger and Rack; as I understand it, Passenger is an Apache module that runs Ruby Rack applications (the Puppet master itself runs as a Rack application) inside an Apache/httpd server. For some reason the installation documentation appears to be missing a few bits and pieces of information, but after watching several YouTube videos and checking installation guides for older versions of Puppet, I was able to begin setting up Rack.

One of the videos I watched, which compared Chef to Puppet, claimed that Chef is most used in the United States and that Puppet is predominantly used everywhere else. I thought this was interesting, and if for some reason I am completely unable to make any headway with Puppet, I will consider Chef as an alternative. I am not positive that I am doing this correctly, but if we can set up the master server, I will be able to test more elements of it. Currently the Puppet master process is running, but it should not be responding to any requests, or even know how to receive them. Once I have a better understanding of how Puppet works and is set up, I will talk with Tom, review what we have, and see what suggestions or room for improvement there would be should we get this into the datacenter.

After diagnosing my Apache-related issues, I realized that CentOS ships with httpd installed, but you need to run the “systemctl start httpd.service” and “systemctl enable httpd.service” commands before you can actually use it. These two commands start the Apache server and enable it at boot. What threw me off was the fact that there WAS an Apache process running on the system, but only one of them. Once the server was enabled, I ran a “ps aux | grep apache” command and saw that there were now 7 processes with the term 'apache' in them. Viewing localhost in a web browser on the server confirms that the Apache 2 server is up, and the default Apache page is accessible via the hostname/IP of the server from within the network (I was able to connect to it and load the same page from my laptop). Now we need to get Puppet to utilize Rack over the Apache interface. I may need to go back and add firewall exceptions for Puppet at some point, but we are far from ready to test this. Or are we? I have absolutely no idea.
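For my own notes, the Apache side boiled down to this. The verification commands are my additions; note that on CentOS/Oracle Linux the worker processes show up as httpd rather than apache.

    sudo systemctl start httpd.service
    sudo systemctl enable httpd.service
    systemctl status httpd.service          # should report "active (running)"
    ps aux | grep -c '[h]ttpd'              # several worker processes once the server is up
    curl -s http://localhost | head -n 5    # should return the default Apache test page HTML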

I believe I have installed Puppet on a single machine and have it functioning properly, but I cannot add my node (laptop) to the server. I think this may be firewall related, as I know I can ping the laptop from the server and the server from the laptop. However, when I run the command “puppet agent --server 149.61.33.168 --waitforcert 60 --test”, I get an error stating that there is 'No route to host' for that IP address on port 8140. I have enabled both TCP and UDP on CentOS 7 (again, or at least I think I did; the CLI parameter asks you to choose either TCP or UDP, and specifying both at once did not work, so I entered port 8140/tcp and port 8140/udp as two separate steps). I will look into doing the same in Ubuntu (the laptop's OS), and if that doesn't work, I will ask for suggestions or see if anything needs to be changed on the network.

I had to change the hostname in the 'puppet.conf' file so that it contained no upper-case letters, which I anecdotally found annoying. I was taught that most code should follow the camelCase convention, where the first word is lowercase and each following word starts with a capital letter for easier reading. Since I am used to that naming convention rather than using hyphens and periods to separate things, I am simply in an adjustment period (hopefully), and this will likely become natural over time.
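Roughly the firewall changes attempted on each side. Puppet's traffic is HTTPS over TCP 8140, so the UDP rule is probably unnecessary; the ufw line for the laptop is just a sketch in case a firewall turns out to be active there.

    # CentOS 7 / Oracle Linux 7 (master):
    sudo firewall-cmd --permanent --add-port=8140/tcp
    sudo firewall-cmd --permanent --add-port=8140/udp
    sudo firewall-cmd --reload
    sudo firewall-cmd --list-ports          # confirm 8140/tcp (and /udp) show up

    # Ubuntu 15.04 (laptop/agent), only if ufw is actually enabled:
    sudo ufw allow 8140/tcp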

UPDATE 2 BELOW:

6/5/15

I began today by investigating the status of port 8140 on the Puppet server. Some quick Google-fu shows that the telnet command can be pointed at a specific port to test a connection to a server. First, I wanted to be certain the port was open on the server. The command to do this was “firewall-cmd --zone=dmz --add-port=8140/tcp --permanent”, followed by reloading the firewall with “firewall-cmd --reload”. By running “telnet 149.61.33.168 8140” from my laptop, I was able to connect successfully, which proves that the port is open via TCP. Perhaps earlier, when I added the TCP rule and then the UDP rule, the UDP rule overwrote the setting.

It looks like the server is seeing the incoming request, but the laptop does not appear to get the certificate back from the server once it has been signed via the CLI. (I believe it would be best for security to leave auto-signing of nodes off, since all an agent needs to do to request a cert is run “puppet agent --server HOSTNAME/IP -t --waitforcert 60”, after which someone goes over to the server, or SSHes in, and runs “puppet cert generate HOSTNAME/IP”.) I can telnet into the server, but the server cannot telnet into the laptop; maybe the port issue remains the same, as I have not re-applied the firewall port change on the laptop yet. I do not know why, because by default Ubuntu 15.04 should not have any active firewalls or restricted ports, but I am still unable to reach the laptop on port 8140.

I am aiming to get this working with one server first, and to learn about Puppet's manifests, catalogs, and lists of facts, which all act as components to be configured. Thinking about it now, it seems as though Puppet wouldn't do anything until such rule sets or configurations are set up, unless it auto-detects the repositories and packages of the OS based on an OS name/string and cross-checks them with another server of some sort, which I do not think it does. I also accidentally managed to create certificates for both the hostname AND the IP address of the Ubuntu laptop, which hasn't made understanding this any easier.

I realized I forgot to include a few important command-line inputs that enabled the firewall on Oracle Linux and allowed us to open ports. First, we added the HTTP and HTTPS services after bringing up the firewall. This was done by typing “sudo firewall-cmd --permanent --add-service=http”, and the same command with https. It appears as though, for security reasons, a fair number of services are disabled by default, but they can be viewed by typing “sudo firewall-cmd --get-services”.
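The connectivity test plus the signing workflow I am converging on. The 'list' and 'sign' subcommands are what I understand Puppet 3.x expects on the master (treat that as my assumption), and the agent hostname below is hypothetical.

    # From the laptop: confirm 8140/tcp is reachable on the master.
    telnet 149.61.33.168 8140

    # On the master: review and sign pending requests instead of auto-signing.
    sudo puppet cert list                          # pending certificate requests
    sudo puppet cert sign laptop01.manhattan.edu   # hypothetical agent hostname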

6/8/15

I began today by ensuring that the server and agent could still communicate. It appears as though the open port on the server has either been closed, or perhaps the agent or daemon is not functioning properly. I've edited the /etc/puppet/puppet.conf file on the agent so it knows the IP and hostname of the server and checks in with the server every 5 minutes or so. I will look into why I can no longer telnet over a known-good port. The main variable that may have changed is the IP of the lab machine I am working with, as on Friday Tom was able to set a static lease for it so I could do further testing in a smaller environment. Another quick Google search suggests that the telnet failure means Puppet may simply not be listening on port 8140, even though it should be set to start on boot. A quick “ps aux -e | grep puppet” shows that the server is in fact NOT running. I have no clue how or why this is the case, as everything was functioning before, and I do not believe the system has been used since then, but I can't know that for sure.

I entered the command “sudo puppet master --daemonize --verbose” and was able to restart the server, hoping to get some error log or output. I made some changes to both the agent's and master's Puppet configurations, but nothing is working yet. I removed the older SSL certs and opted to make new ones just to rule that out as a variable. During the process of creating new certificates, the PID file at “/var/run/puppet/master.pid” could not be created; it looks like this is because I forgot to kill the previously/currently running Puppet processes. After ending the process, re-deleting the SSL certificates, and restarting Puppet, I regenerated the main certificate for the server, this time with no errors. Since the server talks over HTTP using the hostname puppet01.manhattan.edu, and it can be reached from inside the network to see the base Apache page, I have changed the hostnames in the /etc/puppet/puppet.conf file accordingly. If I can successfully request a certificate over the network and have the Puppet server approve it, I should (in theory) only have to come up with some sort of default system configuration for my laptop (the first node I am using to test), and if I can plug that into Puppet, it should work, provided I haven't done anything horrendously wrong thus far.

Now that I have reset the certificates, once I bring the server up I get errors complaining that the laptop is no longer reachable, likely because the certificate it is looking for has been deleted, yet it is still getting requests every update interval. Even as a superuser, the command “puppet cert clean HOSTNAME” with the hostname of the laptop fails because it cannot find the serial number for the certificate. This is a bit frustrating. The same error occurs if an IP address is used, and I still have not found a solution online. If they cannot be removed, I guess I will simply create a new one for the laptop, provided that is possible and that the previous certificates won't conflict with it, since they are all based on the same system/node. One of the most frustrating parts of this project so far has simply been using a Red Hat based OS, as there appear to be very few guides that will help you get a server set up properly; the documentation on the Puppet site could be better as well. One guide I found said to check /var/opt/lib/pe-puppet/client_data/catalog/ to see if any files were stored there, but that folder does not exist on this system. Several other guides mention checking /etc/puppetlabs, but this folder does not exist either (those look like Puppet Enterprise paths, which would explain why they are missing here). I have already “sudo rm -rf” the /etc/puppet/ssl/* directory, and still receive this error.
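The reset sequence that finally regenerated the CA cleanly, roughly. The ssldir path is the one I have been using here; on some installs it is /var/lib/puppet/ssl instead, and running the master in the foreground with --no-daemonize is just to make the output easier to watch.

    sudo pkill -f 'puppet master'                  # stop any running master so the PID file can be recreated
    sudo rm -rf /etc/puppet/ssl/*                  # wipe the old certificates
    sudo puppet master --no-daemonize --verbose    # a new CA and master cert are generated on startup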

After spending more time trying to figure out why the clean action wouldn't work, I discovered that puppet cert has a “revoke” action, which turned out to work for removing the nodes I did not mean to have on there. By first revoking all of the improperly assigned SSL certificates and then cleaning them, I was able to successfully remove the bad certificates. After restarting the services, I was able to get my laptop to request another certificate and had it signed on the server using “puppet cert generate HOSTNAME” with the laptop's hostname. The certificate that had been awaiting a signature was successfully signed on the server. So at this point, if there were some form of configuration for Puppet to hold this system to, I believe it would be doing so. I will need to figure that out next.
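For future reference, the cleanup sequence that worked, as best I can reconstruct it (the hostname below is hypothetical; 'revoke' and 'clean' are the puppet cert actions I used):

    sudo puppet cert revoke badnode.manhattan.edu   # hypothetical stale entry
    sudo puppet cert clean badnode.manhattan.edu
    sudo puppet cert list --all                     # confirm the bad certificates are gone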

6/9/15-6/12/15

I have been looking into the differences between facts, manifests, modules, and catalogs, which Puppet uses to detect, determine, and deploy software to a machine based on its configuration. Facts are gathered by a program Puppet uses called Facter, which resides on both master and agent nodes. Facter holds information about each machine's hardware and environment: network, processor, graphics, software versions, OS type, and other relevant details. A manifest, as I now understand it, is a file of Puppet code that declares the resources (packages, services, files, and so on) a machine should have. A catalog is more like an overview of a specific node's setup: the master compiles it from the manifests, combined with the facts obtained from Facter, and the agent applies it. Modules are self-contained bundles of code and data that package related manifests, files, and templates so they can be reused across nodes. It appears as though you cannot really have one without the others, as any piece on its own would fall short of providing tangible results.

Once we determine how many nodes/servers we are going to be using, we may be able to play around with the Enterprise version of Puppet. It turns out that the Enterprise version is free for the first 10 nodes, and it makes configuring both the master and agent machines significantly easier, since it can be done through a browser-based GUI. Whether this is worth pursuing will depend on the scale of the project, and it may not be worth looking into for production use, but it could be useful for gaining a better understanding of what a correctly configured /etc/puppet/puppet.conf file should look like without having to fiddle with it one line at a time.

Before I can begin creating any template or manifest files, I will first need to determine the types of servers we intend to use Puppet on and which packages and dependencies those servers need, and then attempt to create a manifest for a machine which ensures that whichever daemons need to be running actually are, with an appropriate check-in interval (say 5-10 minutes). Prior to this project I thought manifests were a Puppet-specific feature rather than something found elsewhere in unix/linux. I incidentally went to install Steam on my laptop and was somewhat surprised to notice the CLI mention a manifest. This led me to think about how Puppet should work, and to realize that a manifest is really just a general idea: a declaration of what should be present.
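To make the manifest idea concrete for myself, here is a minimal sketch of the kind of site manifest I have in mind, using the NTP service from earlier as the example. The path assumes open source Puppet 3.x on this box, and the content is my own first attempt rather than a tested configuration.

    # Create a default node definition that keeps the NTP daemon installed and running.
    sudo tee /etc/puppet/manifests/site.pp > /dev/null <<'EOF'
    node default {
      package { 'ntp':
        ensure => installed,
      }
      service { 'ntpd':
        ensure  => running,
        enable  => true,
        require => Package['ntp'],
      }
    }
    EOF

    # Trigger a one-off run on the agent to watch the catalog get applied
    # (the server name is the hostname used earlier in these notes).
    sudo puppet agent --server puppet01.manhattan.edu --test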

I am not sure of the best way to proceed from here without getting input from someone. Once we have a better understanding of the specific machines, services, packages, and overall size of our Puppet show, we will be able to start creating manifests based on those details.

I just found an interesting slideshow on Puppet Labs' website, “Things I Learned While Scaling to 5000 Puppet Agents”. Slide 5 of 36 looked VERY promising and, depending on how many nodes we would like to use, may give us a much better idea of how many resources to allocate.
