Log aggregation systems can help with troubleshooting and other tasks.
eWEEK: Corelight raises new funding to help grow its network security framework, which is based on the open-source Bro project.
HowToForge: On the Linux command line, you work with several types of files—for example, regular files, directories, and symbolic links.
Sometimes our systems are loaded with copies of the same files residing in different locations, eating up our disk space.
FOSSmint: Chronobreak is a new tool geared toward making timing tasks easier and more productive for its users.
Learn how to format date and time for use in a shell script or as a variable, along with different format examples.
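As a quick illustration of the kind of formatting the tutorial covers, the `date` command's standard format specifiers can be captured in a shell variable (the variable names here are just examples):

```shell
#!/bin/sh
# Store the current date as YYYY-MM-DD in a variable.
today=$(date +%Y-%m-%d)
echo "Report generated on $today"

# A full timestamp with time included.
now=$(date "+%Y-%m-%d %H:%M:%S")
echo "Current time: $now"

# Seconds since the Unix epoch, handy for log filenames or arithmetic.
epoch=$(date +%s)
echo "Epoch seconds: $epoch"
```

The `+%Y-%m-%d` style specifiers are standard across GNU coreutils and BSD implementations of `date`, though some extended specifiers vary between the two.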
After a lot of testing, it's clear there are a few amazing open source game engines.
2DayGeek: This tutorial helps you list and view available package groups in Linux.
Web scraping is a technique that consists of extracting data from a website using dedicated software.
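A minimal sketch of the idea using only standard command-line tools—here the page is a local file standing in for one you might fetch with `curl` (the filename and URLs are illustrative):

```shell
#!/bin/sh
# In a real scrape you would first download the page, e.g.:
#   curl -s https://example.com -o page.html
# For this sketch, create a small sample page locally.
cat > page.html <<'EOF'
<a href="https://example.com/one">One</a>
<a href="https://example.com/two">Two</a>
EOF

# Extract every href attribute, then strip the surrounding syntax,
# leaving one URL per line.
grep -oE 'href="[^"]+"' page.html | sed 's/href="//; s/"$//'

rm -f page.html
```

Regex-based extraction like this works for simple, regular markup; for anything more complex, dedicated HTML parsers are the more robust choice.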
These versatile, free tools are all you need to write, edit, and produce your own books.
opensource.com: Learn the tasty way one open source meetup has evolved into a regular cooking meetup, too.
It's great to treat your infrastructure like cattle—until it comes to troubleshooting.
If you've spent enough time at DevOps conferences, you've heard the phrase "pets versus cattle" used to describe server infrastructure. The idea behind this concept is that traditional infrastructure was built by hand without much automation, and therefore, servers were treated more like special pets—you would do anything you could to keep your pet alive, and you knew it by name because you hand-crafted its configuration. As a result, it would take a lot of effort to create a duplicate server if it ever went down. By contrast, modern DevOps concepts encourage creating "cattle", which means that instead of unique, hand-crafted servers, you use automation tools to build your servers so that no individual server is special—they are all just farm animals—and therefore, if a particular server dies, it's no problem, because you can respawn an exact copy with your automation tools in no time.
If you want your infrastructure and your team to scale, there's a lot of wisdom in treating servers more like cattle than pets. Unfortunately, there's also a downside to this approach. Some administrators, particularly those who are more junior, have extended the concept of disposable servers to the point that it has affected their troubleshooting process. Since servers are disposable, and sysadmins can spawn a replacement so easily, at the first hint of trouble with a particular server or service, these administrators destroy and replace it in hopes that the replacement won't show the problem. Essentially, this is the "reboot the Windows machine" approach IT teams used in the 1990s (and Linux admins sneered at), only applied to the cloud.
This approach isn't dangerous because it is ineffective. It's dangerous exactly because it often works. If you have a problem with a machine and reboot it, or if you have a problem with a cloud server and you destroy and respawn it, often the problem does go away. Because the approach appears to work and because it's a lot easier than actually performing troubleshooting steps, that success then reinforces rebooting and respawning as the first resort, not the last resort that it should be.
News briefs for September 11, 2018.
IRC recently celebrated its 30th birthday. The internet chat system was developed in 1988 by Jarkko Oikarinen at the Department of Information Processing Science of the University of Oulu. See the post on the University of Oulu website for more details.
Microsoft is splitting its Visual Studio Team Services (VSTS) into five separate Azure-branded services, which will be called Azure DevOps, Ars Technica reports. In addition, the Azure Pipelines component—"a continuous integration, testing, and deployment system that can connect to any Git repository"—will be available for open-source projects, and "open-source developers will have unlimited build time and up to 10 parallel jobs".
Hortonworks, IBM and Red Hat yesterday announced the Open Hybrid Architecture Initiative, a "new collaborative effort the companies can use to build a common enterprise deployment model that is designed to enable big data workloads to run in a hybrid manner across on-premises, multi-cloud and edge architectures". For the initial phase, the companies will work together to "optimize Hortonworks Data Platform, Hortonworks DataFlow, Hortonworks DataPlane and IBM Cloud Private for Data for use on Red Hat OpenShift, an industry-leading enterprise container and Kubernetes application platform".