Linux Journal

Cleaning Your Inbox with Mutt

3 weeks 5 days ago
by Kyle Rankin

Teach Mutt yet another trick: how to filter messages in your Inbox with a simple macro.

I'm a longtime Mutt user and have written about it a number of times in Linux Journal. Although many people may think it's strange to be using a command-line-based email client in 2018, I find a keyboard-driven email client so much more efficient than clicking around in a web browser. Mutt is extremely customizable, which presents a steep learning curve at first, but now that I'm a few decades in, my Mutt configuration is pretty ideal and fits me like a tailored suit.

Of course, as with any powerful and configurable tool, every now and then I learn of a new Mutt feature that improves my quality of life dramatically. In this case, I was using an email system that didn't offer server-side filters. Because I was a member of many different email groups and aliases, this meant that my Inbox was flooded with emails of all kinds, and it became difficult to separate the unimportant email I wanted to archive from the emails that demanded my immediate attention.

There are many ways to solve this problem, some of which involve tools like offlineimap combined with filtering tools. With email clients like Thunderbird, you also can set up filters that automatically move email to other folders every time you sync. I wanted a similar system with Mutt, except I didn't want it to happen automatically. I wanted to be able to press a key first so I could confirm what was moving. In the process of figuring this out, I discovered a few gotchas I think other Mutt users will want to know about if they set up a similar system.

Tagging Emails

The traditional first step when setting up a keyboard macro to move email messages based on a pattern would be to use Mutt's tagging-by-pattern feature (by default, the T key) to tag all the messages in a folder that match a certain pattern. For instance, if all of your cron emails have "Cron Daemon" in the subject line, you would type the following key sequence to tag all of those messages:

TCron Daemon<Enter>

That's the uppercase T, followed by the pattern I want to match in the subject line (Cron Daemon) and then the Enter key. If I type that while I'm in my Mutt index window that shows me all the emails in my Inbox, it will tag all of the messages that match that pattern, but it won't do anything with them yet. To act on all of those messages, I press the ; key (by default), followed by the action I want to perform. So to save all of the tagged email to my "cron" folder, I would type:
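;s=cron<Enter>

That's the ; tag-prefix key, which applies the next function to all tagged messages, followed by s for save-message, the =cron folder name at the resulting prompt and Enter to confirm (all Mutt default bindings). To get the one-keystroke filter the introduction promises, the whole sequence can be wrapped in a .muttrc macro along these lines (the C binding and folder name here are illustrative, not prescribed):

macro index C "TCron Daemon<enter>;s=cron<enter>" "save tagged cron messages to =cron"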

Go to Full Article
Kyle Rankin

Everything You Need to Know about Linux Containers, Part II: Working with Linux Containers (LXC)

3 weeks 6 days ago
by Petros Koutoupis

Part I of this Deep Dive on containers introduces the idea of kernel control groups, or cgroups, and the way you can isolate, limit and monitor selected userspace applications. Here, I dive a bit deeper and focus on the next step of process isolation—that is, through containers, and more specifically, the Linux Containers (LXC) framework.

Containers are about as close to bare metal as you can get when running virtual machines. They impose very little to no overhead when hosting virtual instances. First introduced in 2008, LXC adopted much of its functionality from the Solaris Containers (or Solaris Zones) and FreeBSD jails that preceded it. Instead of creating a full-fledged virtual machine, LXC enables a virtual environment with its own process and network space. Using namespaces to enforce process isolation and leveraging the kernel's very own control groups (cgroups) functionality, LXC limits, accounts for and isolates the CPU, memory, disk I/O and network usage of one or more processes. Think of this userspace framework as a very advanced form of chroot.

But what exactly are containers? The short answer is that containers decouple software applications from the operating system, giving users a clean and minimal Linux environment while running everything else in one or more isolated "containers". The purpose of a container is to launch a limited set of applications or services (often referred to as microservices) and have them run within a self-contained sandboxed environment.

Figure 1. A Comparison of Applications Running in a Traditional Environment to Containers

This isolation prevents processes running within a given container from monitoring or affecting processes running in another container. Also, these containerized services do not influence or disturb the host machine. The ability to consolidate many services scattered across multiple physical servers onto one is among the many reasons data centers have chosen to adopt the technology.
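To make this concrete, here's a minimal command-line sketch of the LXC workflow, assuming the LXC userspace tools are installed and using the generic download template (the container name, distribution and release are illustrative):

# create a container from a prebuilt image
sudo lxc-create -n demo1 -t download -- -d ubuntu -r bionic -a amd64
# start it and verify that it's running
sudo lxc-start -n demo1
sudo lxc-ls -f
# open a shell inside the container's isolated process and network space
sudo lxc-attach -n demo1
# stop and delete the container when finished
sudo lxc-stop -n demo1
sudo lxc-destroy -n demo1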

Container features include the following:

Go to Full Article
Petros Koutoupis

New Raspberry Pi PoE HAT, UBports Foundation Releases Ubuntu Touch OTA-4, OpenSSH 7.8 Now Available, KDE Enhancements and Seagate Media Server SQL Injection Vulnerabilities

3 weeks 6 days ago

News briefs for August 27, 2018.

Raspberry Pi Trading is offering a Power-over-Ethernet HAT board for the RPi 3 Model B+ for $20 that ships with a small fan. Linux Gizmos notes that the "802.3af-compliant 'Raspberry Pi PoE HAT' allows delivery of up to 15W over the RPi 3 B+'s USB-based GbE port without reducing the port's up to 300Mbps bandwidth." To purchase, visit here.

UBports Foundation has released Ubuntu Touch OTA-4. This release features Ubuntu 16.04 and includes many security fixes and stability improvements. UBports notes that "We believe that this is the 'official' starting point of the UBports project. From the point when Canonical dropped the project until today, the community has been playing 'catch up' in development, infrastructure, and community building. This release shows that the community is soundly based and capable of delivering."

OpenSSH 7.8 was released August 24, 2018, and is available from its mirrors at https://www.openssh.com.

KDE developers continue to enhance KDE. According to Phoronix, the latest usability and productivity improvements include a new Plasmoid that brings easy access to the screen layout switcher, the logout screen will now warn you when other users are still logged in, new thumbnails for AppImages and more.

Several SQL injection vulnerabilities were discovered in the Seagate Media Server. Evidently the public folder facility "can be abused by malicious attackers when they upload troublesome files and media to the folder in the cloud". See the Appuals post for more details about this exploit.

News Raspberry Pi Ubuntu Touch UBports OpenSSH KDE Plasma
Jill Franklin

Intel Reworks Microcode Security Fix License after Backlash, Intel's FSP Binaries Also Re-licensed, Valve Releases Beta of Steam Play for Linux, Chromebooks Running Linux 3.14 or Older Won't Get Linux App Support and Windows 95 Now an App

4 weeks 1 day ago

News briefs for August 24, 2018.

Intel has now reworked the license for its microcode security fix after outcry from the community. The Register quotes Imad Sousou, corporate VP and general manager of Intel Open Source Technology Center, "We have simplified the Intel license to make it easier to distribute CPU microcode updates and posted the new version here. As an active member of the open source community, we continue to welcome all feedback and thank the community."

Intel also has re-licensed its FSP binaries, which are used by Coreboot, LinuxBoot and Facebook's Open Compute Project, so that they are under the same license as the CPU microcode files. According to the Phoronix post, "The short and unofficial summary of that license text is it allows for redistribution (and benchmarking, if so desired) of the binaries and the restrictions essentially come down to no reverse-engineering/disassembly of the binaries and respecting the copyright."

Valve announced this week that it's releasing the Beta of a new and improved Steam Play version to Linux. The new version includes "a modified distribution of Wine, called Proton, to provide compatibility with Windows game titles." Other improvements include DirectX 11 and 12 implementations now based on Vulkan, improved full-screen support, improved game controller support, and "Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support".

Linux app support will be available soon for many Chromebooks, but a post on the Chromium Gerrit indicates that devices running Linux 3.14 or older will not be included. See this BetaNews article for a full list of the Chromebooks that won't be able to run Linux apps.

Windows 95 is now an app you can run on Linux, macOS and Windows, thanks to Slack developer Felix Rieseberg, who created the Electron app. See The Verge for more details. The source code and app installers are available on GitHub.

News Intel licensing Valve gaming Chromebooks Windows
Jill Franklin

Organizing a Market for Applications

4 weeks 2 days ago
by Sriram Ramkrishna

The "Year of the Desktop" has been a perennial call to arms that's sunken into a joke that's way past its expiration date. We frequently talk about the "Year of the Desktop", but we don't really talk about how we would achieve that goal. What does the "Year of the Desktop" even look like?

What it comes down to is applications—rather, a market for applications. There is no market for applications because of a number of cultural artifacts that began when the Free Software movement was just getting up on wobbly legs.

Today, what we have is a distribution-centric model. Software is distributed by an OSV (operating system vendor), and users get their software directly from there via whatever packaging mechanism that OSV supports. This model evolved because, in the early-to-mid 1990s, those OSVs existed to compile the kernel and userspace into a cohesive product. Packaging of applications was the next step, a convenience that saved users from having to compile their own applications, which always was a hit-or-miss endeavor because developers had different development environments from their users. Ultimately, OSVs enjoyed being gatekeepers as part of keeping developers honest and fixing issues that were unique to their operating system. OSVs saw themselves as agents representing users to provide high-quality software, and there was a feeling that developers were not to be trusted, as, of course, nobody knows the state of an operating system better than its vendor.

However, this model presented a number of challenges to both commercial and open-source developers. For commercial developers, the problem became how to maximize their audience when the "Linux" market consisted of a number of major OSVs and an uncountable number of smaller niche distributions. Commercial application developers would have to build multiple versions of their application targeted at the various major distributions for fear of missing out on a subset of users. Over time, commercial application developers settled on targeting Ubuntu or offering a compressed tar file hosted on their own websites, and various distributions would pick up these tarballs and repackage them for their users. If you were an open-source developer with a large following, you had the side benefit of distributions automatically picking up and packaging your work, but you faced the same dilemma.

Go to Full Article
Sriram Ramkrishna

Debian Withholding Intel Security Patches, Linus Torvalds on the XArray Pull Request, Red Hat Transitioning Its Container Registry, Akraino Edge Stack Moves to Execution Phase, openSUSE Tumbleweed Snapshots Released and digiKam 6.0.0 Beta 1 Now Available

4 weeks 2 days ago

News briefs for August 23, 2018.

Debian is withholding security patches for the latest Intel CPU design flaw due to licensing issues. The Register reports that the end-user license file Intel added to the archive "prohibits, among other things, users from using any portion of the software without agreeing to be legally bound by the terms of the license", and Debian is not having it. See also Bruce Perens' blog post on this issue.

Linus Torvalds ranted about the XArray pull request this week on the LKML saying, "For some unfathomable reason, you have based it on the libnvdimm tree. I don't understand at all why you did that. That libnvdimm tree didn't get merged, because it had complete garbage in the mm/ code. And yes, that buggy shit was what you based the radix tree code on. I seriously have no idea why you have based it on some unstable random tree in the first place."

Red Hat is transitioning its customers and product portfolio to a new container registry for Red Hat container images at registry.redhat.io. Red Hat notes that as it makes this transition, "the goal is to have a uniform experience for all of our registries that uses industry standard Open Authorization (OAuth)."

The Linux Foundation announced that its Akraino Edge Stack, "designed to improve the state of edge cloud infrastructure for enterprise edge, OTT edge, and carrier edge networks", is moving from formation to execution. The Akraino Edge Stack seed code will be released to the community this week at the Akraino Edge Stack Developer Summit.

Two openSUSE Tumbleweed snapshots were released this week. Changes include a move to kernel 4.18.0, KVM improvements, Mozilla Firefox 61.0.2 and many more fixes and updates.

digiKam 6.0.0 beta 1 was released recently. The next major version will include "full support of video files management working as photos"; "new tools to export to Pinterest, OneDrive and Box web-services"; "an integration of all import/export web-service tools in LightTable, Image editor and Showfoto"; and many more improvements.

News kernel Linus Torvalds Debian Intel Red Hat Containers The Linux Foundation Akraino Edge Stack digiKam openSUSE Distributions
Jill Franklin

Copy and Paste in Screen

1 month ago
by Kyle Rankin

Put the mouse down, and copy and paste inside a terminal with your keyboard using Screen.

Screen is a command-line tool that lets you set up multiple terminal windows within it, detach them and reattach them later, all without any graphical interface. This program has existed since before I started using Linux, but before writing a tech tip about it, I clearly first need to address the fact that I'm using Screen at all in 2018. I can already hear you ask, "Why not tmux?" Well, because every time someone tries to convince me to make the switch, it's usually for one of the following reasons:

  • Screen isn't getting updates: I've been happy with the current Screen feature set for more than a decade, so as long as distributions continue to package it, I don't feel like I need any newer version.
  • tmux key bindings are so much simpler: I climbed the Screen learning curve more than a decade ago, so to me, the Screen key bindings are second nature.
  • But you can do vertical and horizontal splits in tmux: you can do them in Screen too, and since I climbed that learning curve ages ago, navigating splits is part of my muscle memory, just like inside vim.

So now that those arguments are out of the way, I thought those of you still using Screen might find it useful to learn how to do copy and paste within Screen itself. Although it's true that you typically can use your mouse to highlight text and paste it, if you are a fan of staying on the home row like I am, you realize how much faster and more efficient it is if you can copy and paste from within Screen itself using the keyboard. In fact, I found that once I learned this method, I ended up using it multiple times every day.

Enter Copy Mode

The first step is to enter copy mode from within Screen. Press Ctrl-a-[ to enter copy mode. Once you're in this mode, you can use arrow keys or vi-style keybindings to navigate up and down your terminal window. This is handy if you are viewing a log or other data that has scrolled off the screen and you want to see it. Typically people who are familiar with copy mode just use it for scrolling and then press q to exit that mode, but once you are in copy mode, you also can move the cursor to an area you want to copy.

Copy Text

To copy text once in copy mode, move your cursor to where you want to start copying and then press the space bar. This starts the text selection, and you'll see the cursor change so that it highlights the text as you move the cursor to select everything you want to copy. Once you are done selecting text, press the space bar again, and the selection will be copied to Screen's copy buffer. Once text is copied to the buffer, Screen automatically exits copy mode.
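Put together, the entire keyboard round trip looks like this (Ctrl-a ] is Screen's default paste binding):

Ctrl-a [    enter copy mode
            (move with the arrow keys or vi-style h/j/k/l)
Space       begin the selection
            (move to extend the highlight)
Space       copy the selection and exit copy mode
Ctrl-a ]    paste the buffer at the current cursor position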

Go to Full Article
Kyle Rankin

Mozilla Announces Major Improvements to Its Hubs Social Mixed Reality Platform, Windmill Enterprise Joins The Linux Foundation, Cloud Foundry Survey Results, New Bodhi Linux Major Release and Red Hat Enterprise Linux 7.6 Beta Now Available

1 month ago

News briefs for August 22, 2018.

Mozilla announced major improvements in its open-source Hubs, "an experiment to bring Social Mixed Reality to the browser". You now are able to bring videos, images, documents and 3D models into Hubs just by pasting in a link. You can join a room in Hubs and get together with others in Mixed Reality using any VR device or your phone or PC. In addition, any content you upload is available only to others in the room and is encrypted and removed when no longer used. The code for Hubs is available on GitHub.

Windmill Enterprise announced it has joined The Linux Foundation to collaborate on EdgeX Foundry and LF Networking (LFN). As part of its work with these projects, "Windmill will incorporate open source, blockchain solutions that enable broader adoption of industrial IoT frameworks into the enterprise. In addition, Windmill will contribute enterprise-class mobile networking security solutions to the largest global open source innovation community." Windmill is also working with the FreeIPA project for identity management. You can learn more here.

According to a recent Cloud Foundry Foundation (CFF) survey, Java and JavaScript are the top enterprise languages. See ZDNet for more information on the survey results.

The Bodhi Team announced a new major release this morning, version 5.0.0. The announcement notes that the new version doesn't include a ton of changes, but instead "simply serves to bring a modern look and updated Ubuntu core (18.04) to the lightning fast desktop you have come to expect from Bodhi Linux."

Red Hat Enterprise Linux 7.6 beta is now available. According to the Red Hat blog, "Red Hat Enterprise Linux 7.6 beta adds new and enhanced capabilities emphasizing innovations in security and compliance features, management and automation, and Linux containers." See the Release Notes for more information.

News Mozilla VR The Linux Foundation IOT Blockchain Cloud Java JavaScript Programming Distributions Bodhi Red Hat Containers
Jill Franklin

New Intel Caching Feature Considered for Mainline

1 month ago
by Zack Brown

These days, Intel's name is Mud in various circles because of the Spectre/Meltdown CPU flaws and other similar hardware issues that seem to be emerging as well. But, there was a recent discussion between some Intel folks and the kernel folks that was not related to those things. Some thrust-and-parry still was going on between kernel person and company person, but it seemed to have more to do with trying to get past marketing speak than with wrestling over what Intel is doing to fix its longstanding hardware flaws.

Reinette Chatre of Intel posted a patch for a new chip feature called Cache Allocation Technology (CAT), which "enables a user to specify the amount of cache space into which an application can fill". Among other things, Reinette offered the disclaimer, "The cache pseudo-locking approach relies on generation-specific behavior of processors. It may provide benefits on certain processor generations, but is not guaranteed to be supported in the future."

Thomas Gleixner thought Intel's work looked very interesting and in general very useful, but he asked, "are you saying that the CAT mechanism might change radically in the future [that is, in future CPU chip designs] so that access to cached data in an allocated area which does not belong to the current executing context wont work anymore?"

Reinette replied, "Cache Pseudo-Locking is a model-specific feature so there may be some variation in if, or to what extent, current and future devices can support Cache Pseudo-Locking. CAT remains architectural."

Thomas replied, "that does NOT answer my question at all."

At this point, Gavin Hindman of Intel joined the discussion, saying:

Support in a current generation of a product line doesn't imply support in a future generation. Certainly we'll make every effort to carry support forward, and would adjust to any changes in CAT support, but we can't account for unforeseen future architectural changes that might block pseudo-locking use-cases on top of CAT.

And Thomas replied, "that's the real problem. We add something that gives us some form of isolation, but we don't know whether next generation CPUs will work. From a maintainability and usefulness POV that's not a really great prospect."

Elsewhere in a parallel part of the discussion, Thomas asked, "Are there real world use cases that actually can benefit from this [CAT feature] and what are those applications supposed to do once the feature breaks with future generations of processors?"

Reinette replied, "This feature is model-specific with a few platforms supporting it at this time. Only platforms known to support Cache Pseudo-Locking will expose its resctrl interface."

To which Thomas said, "you deliberately avoided to answer my question again."

Go to Full Article
Zack Brown

Haiku Release R1/beta1, Flatpak v1.0.0, SUSE Updates Its Kernel to Boost Performance on Azure, Debian Receives L1TF Mitigation Updates

1 month ago

Any old school BeOS fans in the audience? If so, the Haiku development team just announced the upcoming release R1/beta1.

Flatpak, the software utility for package deployment in a sandbox environment, just cut release version 1.0.0. It comes with performance and stability improvements.

SUSE has had a long history with Microsoft, and it would seem that its relationship with the software giant continues with the distribution's kernel updates to boost performance on Azure.

In more L1TF-related news, the Debian GNU/Linux 9 (Stretch) distribution just received mitigation updates for this recent and high-profile vulnerability.

News
Petros Koutoupis

Freespire 4.0, Mozilla Announces New Fellows, Flatpak 1.0, KDevelop 5.2.4 and Net Neutrality Update

1 month ago

News briefs for August 21, 2018.

Freespire 4.0 has been released. This release brings a migration of the Ubuntu 16.04 LTS codebase to the 18.04 LTS codebase, which adds many usability improvements and more hardware support. Other updates include an intuitive dark mode, "night light", Geary 0.12, Chromium browser 68 and much more.

Mozilla announced its 2018–2019 Fellows in openness, science and tech policy today. These fellows "will spend the next 10 to 12 months creating a more secure, inclusive, and decentralized internet". In the past, Mozilla fellows "built secure platforms for LGBTQ individuals in the Middle East; leveraged open-source data and tools to bolster biomedical research across the African continent; and raised awareness about invasive online tracking." See the Mozilla blog for more information and the list of Fellows.

Flatpak 1.0 has been released, marking the first version in a new stable series, and distributions should update as soon as possible. See the project's GitHub page for all the fixes and new features, which include faster installation and updates, a new portal, the ability to mark applications as end-of-life and much more. See also the Flatpak documentation for more information.

KDevelop released version 5.2.4 today. This release contains a few bug fixes and "should be a very simple transition for anyone using 5.2.x currently". You can download it from here.

Reuters reports that 22 states are asking the US appeals court to reinstate Net Neutrality rules. In addition, several internet companies, media and technology advocacy groups filed a separate challenge yesterday to overturn the FCC ruling.

News Freespire Mozilla KDE Net Neutrality KDevelop Flatpak
Jill Franklin

Everything You Need to Know about Linux Containers, Part I: Linux Control Groups and Process Isolation

1 month ago
by Petros Koutoupis

Everyone's heard the term, but what exactly are containers?

The software enabling this technology comes in many forms, with Docker as the most popular. The recent rise in popularity of container technology within the data center is a direct result of its portability and ability to isolate working environments, thus limiting its impact and overall footprint to the underlying computing system. To understand the technology completely, you first need to understand the many pieces that make it all possible.

Sidenote: people often ask about the difference between containers and virtual machines. Both have a specific purpose and place with very little overlap, and one doesn't obsolete the other. A container is meant to be a lightweight environment that you spin up to host one to a few isolated applications at bare-metal performance. You should opt for virtual machines when you want to host an entire operating system or ecosystem or maybe to run applications incompatible with the underlying environment.

Linux Control Groups

Truth be told, certain software applications in the wild may need to be controlled or limited—at least for the sake of stability and, to some degree, security. Far too often, a bug or just bad code can disrupt an entire machine and potentially cripple an entire ecosystem. Fortunately, a way exists to keep those same applications in check. Control groups (cgroups) is a kernel feature that limits, accounts for and isolates the CPU, memory, disk I/O and network usage of one or more processes.

Originally developed by Google, the cgroups technology eventually would find its way to the Linux kernel mainline in version 2.6.24 (January 2008). A redesign of this technology—that is, the addition of kernfs (to split some of the sysfs logic)—would be merged into both the 3.15 and 3.16 kernels.

The primary design goal for cgroups was to provide a unified interface to manage processes or whole operating-system-level virtualization, including Linux Containers, or LXC (a topic I plan to revisit in more detail in a follow-up article). The cgroups framework provides the following:

  • Resource limiting: a group can be configured not to exceed a specified memory limit or use more than the desired amount of processors or be limited to specific peripheral devices.
  • Prioritization: one or more groups may be configured to utilize fewer or more CPUs or disk I/O throughput.
  • Accounting: a group's resource usage is monitored and measured.
  • Control: groups of processes can be frozen or stopped and restarted.

A cgroup can consist of one or more processes that are all bound to the same set of limits. These groups also can be hierarchical, which means that a subgroup inherits the limits administered to its parent group.
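As a quick taste of how these pieces look from a shell, here's a minimal sketch using the legacy cgroup-v1 interface, assuming the memory controller is mounted in the usual location (the "demo" group name and the 64MB cap are illustrative):

# create a child group under the memory controller (hierarchy in action)
sudo mkdir /sys/fs/cgroup/memory/demo
# resource limiting: cap the group at 64MB of RAM
echo $((64 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# control: place the current shell (and its future children) in the group
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/tasks
# accounting: read back what the group is actually consuming
cat /sys/fs/cgroup/memory/demo/memory.usage_in_bytes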

Go to Full Article
Petros Koutoupis

Trinity Desktop Environment New Release, New Read-Only File System Designed for Android Devices, CloudNative Conference Coming Up, Retro Arcade Games Coming to Polycade

1 month ago

The Trinity Desktop Environment (TDE) development team just announced the release of TDE R14.0.5. The TDE project began as a continuation of the K Desktop Environment (KDE) version 3.

The Huawei-developed EROFS is finding its way into the Linux 4.19 staging tree. EROFS is a read-only file system designed for Android devices.

Mark your calendars for September 12–13: the CloudNative, Docker, and K8s Summit will be hosted in Dallas, Texas, this year. To learn more, visit the official conference website.

Tyler Bushnell, the son of Atari co-founder Nolan Bushnell, is working to bring back retro arcade games with Polycade, an arcade machine that is smaller than a full cabinet and can hang on a wall.

News
Petros Koutoupis

New Version of KStars, Google Launches Edge TPU and Cloud IoT Edge, Lower Saxony to Migrate from Linux to Windows, GCC 8.2 Now Available and VMware Announces VMworld 2018

1 month 3 weeks ago

News briefs for July 26, 2018.

A new version of KStars—the free, open-source, cross-platform astronomy software—was released today. Version 2.9.7 includes new features, such as improvements to the polar alignment assistant and support for Ekos Live, as well as stability fixes. See the release notes for all the changes.

Google yesterday announced two new products: Edge TPU, a new "ASIC chip designed to run TensorFlow Lite ML models at the edge", and Cloud IoT Edge, which is "a software stack that extends Google Cloud's powerful AI capability to gateways and connected devices". Google states that "By running on-device machine learning models, Cloud IoT Edge with Edge TPU provides significantly faster predictions for critical IoT applications than general-purpose IoT gateways—all while ensuring data privacy and confidentiality."

The state of Lower Saxony in Germany is set to migrate away from Linux and back to Windows, following Munich's similar decision, ZDNet reports. The state currently has 13,000 workstations running openSUSE that it plans to migrate to "a current version of Windows" because "many of its field workers and telephone support services already use Windows, so standardisation makes sense". It's unclear how many years this migration will take.

GCC 8.2 was released today. This release is a bug-fix release and contains "important fixes for regressions and serious bugs in GCC 8.1 with more than 99 bugs fixed since the previous release", according to Jakub Jelinek's release statement. You can download GCC 8.2 here.

VMware announces VMworld 2018, which will be held August 26–30 in Las Vegas. The theme for the conference is "Possible Begins with You", and the event will feature keynotes by industry leaders, user-driven panels, certification training and labs. Topics will include "Data Center and Cloud, Networking and Security, Digital Workspace, Leading Digital Transformation, and Next-Gen Trends including the Internet of Things, Network Functions Virtualization and DevOps". For more information and to register, go here.

News KDE Astronomy Science Google IOT Cloud Windows openSUSE GCC VMware
Jill Franklin

Progress with Your Image

1 month 4 weeks ago
by Kyle Rankin

Learn a few different ways to get a progress bar for your dd command.

The dd tool has been a critical component on the Linux (and UNIX) command line for ages. You know a command-line tool is important if it has only two letters, and dd is no exception. What I love about it in particular is that it truly embodies the sense of a powerful tool with no safety features, as described in Neal Stephenson's In the Beginning was the Command Line. The dd command does something simple: it takes input from one file and outputs it to another file, and since in UNIX "everything is a file", that means dd doesn't care if the output file is another file on your disk, a partition or even your active hard drive; it happily will overwrite it! Because of this, dd fits in that immortal category of sysadmin tools that I type out and then pause for five to ten seconds, examining the command, before I press Enter.

Unfortunately, dd has fallen out of favor lately, and some distributions even will advise using tools like cp or a graphical tool to image drives. This is largely out of the concern that dd doesn't wait for the disk to sync before it exits, so even if it thinks it's done writing, that doesn't mean all of the data is on the output file, particularly if it's over slow I/O like in the case of USB flash storage. The other reason people have tended to use other imaging tools is that traditionally dd doesn't output any progress. You type the command, and then if the image is large, you just wait, wait and then wait some more, wondering if dd will ever complete.
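The sync concern, at least, has a direct fix in modern GNU dd: the conv=fsync flag tells dd to flush data all the way to the device before it exits (a sketch; the device name is illustrative, so double-check it first):

$ sudo dd if=/some/file.iso of=/dev/sdX bs=1M conv=fsync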

But, it turns out that there are quite a few different ways to get progress output from dd, so I cover a few popular ones here, all based on the following dd command to image an ISO file to a disk:

$ sudo dd if=/some/file.iso of=/dev/sdX bs=1M

Option 1: Use pv

Like many command-line tools, dd can accept input from a pipe and output to a pipe. This means if you had a tool that could measure the data flowing over a pipe, you could sandwich it in between two different dd commands and get live progress output. The pv (pipe viewer) command-line tool is just such a tool, so one approach is to install pv using your distribution's packaging tool and then create a pv and dd sandwich:
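A sketch of that sandwich, reusing the example command above (again, the device name is illustrative):

$ sudo dd if=/some/file.iso bs=1M | pv | sudo dd of=/dev/sdX bs=1M

Since pv only sees bytes flowing through the pipe, it can't compute a percentage on its own; giving it the total size with -s (for example, pv -s $(stat -c %s /some/file.iso)) adds a progress bar and ETA as well.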

Go to Full Article
Kyle Rankin

Red Hat's "Road to A.I." Film, Google Chrome Marks HTTP Connections Not Secure, BlueData Launches BlueK8s Project, Linux Bots Account for 95% of DDoS Attacks and Tron Buys BitTorrent

1 month 4 weeks ago

News briefs for July 25, 2018.

Red Hat's Road to A.I. film has been chosen as an entry in the 19th Annual Real to Reel International Film Festival. According to the Red Hat blog post, this "documentary film looks at the current state of the emerging autonomous vehicle industry, how it is shaping the future of public transportation, why it is a best use case for advancing artificial intelligence and how open source can fill the gap between the present and the future of autonomy." The Road to A.I. is the fourth in Red Hat's Open Source Stories series, and you can view it here.

Google officially has begun marking HTTP connections as not secure for all Chrome users, as it promised in a security announcement two years ago. The goal is eventually "to make it so that the only markings you see in Chrome are when a site is not secure, and the default unmarked state is secure". Also, beginning in October 2018, Chrome will start showing a red "not secure" warning when users enter data on HTTP pages.

BlueData launched the BlueK8s project, which is an "open source project that seeks to make it easier to deploy big data and artificial intelligence (AI) application workloads on top of Kubernetes", Container Journal reports. The BlueK8s "project is based on container technologies the company developed originally to accelerate the deployment of big data based on Hadoop and Apache Spark software".

According to the latest Kaspersky Lab report, Linux bots now account for 95% of all DDoS attacks. A post on Beta News reports that these attacks are based on some rather old vulnerabilities, such as one in the Universal Plug-and-Play protocol, which has been around since 2001, and one in the CHARGEN protocol, which was first described in 1983. See also the Kaspersky Lab blog for more Q2 security news.

BitTorrent has been bought by Tron, a blockchain startup, for "around $126 million in cash". According to the story on Engadget, Tron's founder Justin Sun says that this deal now makes his company the "largest decentralized Internet ecosystem in the world."

News Red Hat AI open source Google Chrome Security Blockchain
Jill Franklin

Some of Intel's Effort to Repair Spectre in Future CPUs

1 month 4 weeks ago
by Zack Brown

Dave Hansen from Intel posted a patch and said, "Intel is considering adding a new bit to the IA32_ARCH_CAPABILITIES MSR (Model-Specific Register) to tell when RSB (Return Stack Buffer) underflow might be happening. Feedback on this would be greatly appreciated before the specification is finalized." He explained that RSB:

...is a microarchitectural structure that attempts to help predict the branch target of RET instructions. It is implemented as a stack that is pushed on CALL and popped on RET. Being a stack, it can become empty. On some processors, an empty condition leads to use of the other indirect branch predictors which have been targeted by Spectre variant 2 (branch target injection) exploits.

The new MSR bit, Dave explained, would tell the CPU not to rely on data from the RSB if the RSB was already empty.

Linus Torvalds replied:

Yes, please. It would be lovely to not have any "this model" kind of checks.

Of course, your patch still doesn't allow for "we claim to be skylake for various other independent reasons, but the RSB issue is fixed".

So it might actually be even better with _two_ bits: "explicitly needs RSB stuffing" and "explicitly fixed and does _not_ need RSB stuffing".

And then if neither bit is set, we fall back to the implicit "we know Skylake needs it".

If both bits are set, we just go with a "CPU is batshit schitzo" message, and assume it needs RSB stuffing just because it's obviously broken.

On second thought, however, Linus withdrew his initial criticism of Dave's patch, regarding claiming to be skylake for non-RSB reasons. In a subsequent email Linus said, "maybe nobody ever has a reason to do that, though?" He went on to say:

Virtualization people may simply want the user to specify the model, but then make the Spectre decisions be based on actual hardware capabilities (whether those are "current" or "some minimum base"). Two bits allow that. One bit means "if you claim you're running skylake, we'll always have to stuff, whether you _really_ are or not".

Arjan van de Ven agreed it was extremely unlikely that anyone would claim to be skylake unless it was to take advantage of the RSB issue.

That was it for the discussion, but it's very cool that Intel is consulting with the kernel people about these sorts of hardware decisions. It's an indication of good transparency and an attempt to avoid the fallout of making a bad technical decision that would incur further ire from the kernel developers.

Note: if you're mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.

Go to Full Article
Zack Brown

Cooking with Linux (without a Net): Backups in Linux, LuckyBackup, gNewSense and PonyOS

1 month 4 weeks ago

Please support Linux Journal by subscribing or becoming a patron.

It's Tuesday, and it's time for Cooking with Linux (without a Net), where I do some Linuxy and open-source stuff, live, on camera, and without the benefit of post-video editing—therefore providing a high probability of falling flat on my face. And now, the classic question: What shall I cover? Today, I'm going to look at backing up your data using the command line and a graphical front end. I'm also going to look at the free-iest and open-iest distribution ever. And, I'm also going to check out a horse-based operating system that is open source but supposedly not Linux. Hmm...

Cooking with Linux
Marcel Gagné

Security Keys Work for Google Employees, Canonical Releases Kernel Update, Plasma 5.14 Wallpaper Revealed, Qmmp Releases New Version, Toshiba Introduces New SSDs

1 month 4 weeks ago

News briefs for July 24, 2018.

Google requires all of its 85,000 employees to use security keys, and it hasn't had one case of account takeover by phishing since then, Engadget reports. The security key method is considered to be safer than two-factor authentication that requires codes sent via SMS.

Canonical has released a new kernel update to "fix the regression causing boot failures on 64-bit machines, as well as for OEM processors and systems running on Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and other cloud environments", according to Softpedia News. Users of Ubuntu 18.04 and 16.04 LTS should update to the new kernel version as soon as possible. See the Linux kernel regression security notice (USN-3718-1) for more information.

New Plasma 5.14 wallpaper, "Cluster", has been revealed on Ken Vermette's blog. He writes that it's "the first wallpaper for KDE produced using the ever excellent Krita." You can see the full image here.

Qmmp, the Qt-based Linux audio player, recently released version 1.2.3. Changes in the new version include adding qmmp 0.12/1.3 config compatibility, disabling global shortcuts during configuration, fixing some gcc warnings and metadata updating issues and more. Downloads are available here.

Toshiba introduces a new lineup of SSDs based on its 96-layer, BiCS FLASH 3D flash memory. It's the first SSD to use this "breakthrough technology", and "the new XG6 series is targeted to the client PC, high-performance mobile, embedded, and gaming segments—as well as data center environments for boot drives in servers, caching and logging, and commodity storage." According to the press release, "the XG6 series will be available in capacities of 256, 512 and 1,024 gigabytes" and are currently available only as samples to select OEM customers.

News Google Security email Canonical Ubuntu kernel Plasma Desktop KDE qt Audio/Video multimedia Hardware SSDs
Jill Franklin

Building a Bare-Bones Git Environment

1 month 4 weeks ago
by Andy Carlson

How to migrate repositories from GitHub, configure the software and get started with hosting Git repositories on your own Linux server.

With the recent news of Microsoft's acquisition of GitHub, many people have chosen to research other code-hosting options. Self-hosted solutions like GitLab offer a polished UI similar in functionality to GitHub's, but they require reasonably well-powered hardware and provide many features that casual Git users won't necessarily find useful.

For those who want a simpler solution, it's possible to host Git repositories locally on a Linux server using a few basic pieces of software that require minimal system resources and provide basic Git functionality including web accessibility and HTTP/SSH cloning.

In this article, I show how to migrate repositories from GitHub, configure the necessary software and perform some basic operations.

Migrating Repositories

The first step in migrating away from GitHub is to relocate your repositories to the server where they'll be hosted. Since Git is a distributed version control system, a cloned copy of a repository contains all information necessary for running the entire repository. As such, the repositories can be cloned from GitHub to your server, and all the repository data, including commit logs, will be retained. If you have a large number of repositories, this could be a time-consuming process. To ease this process, here's a bash function to return the URLs for all repositories hosted by a specific GitHub user:

genrepos() {
    if [ -z "$1" ]; then
        echo "usage: genrepos <username>"
    else
        repourl="https://github.com/$1?tab=repositories"
        while [ -n "$repourl" ]; do
            # print the clone URL of every repository on the current page
            curl -s "$repourl" | awk '/href.*codeRepository/ {print gensub(/^.*href="\/(.*)\/(.*)".*$/,"https://github.com/\\1/\\2.git","g",$0); }'
            # follow the "Next" pagination link, if one exists
            export repourl=$(curl -s "$repourl" | grep '>Previous<.*href.*>Next<' | grep -v 'disabled">Next' | sed 's/^.*href="//g;s/".*$//g;s/^/https:\/\/github.com/g')
        done
    fi
}

This function accepts a single argument: the GitHub user name. If the output of this command is piped into a while loop that reads each line, each line can be fed into a git clone statement. The repositories will be cloned into the /opt/repos directory:
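A sketch of that loop (assuming /opt/repos already exists and is writable, and with "someuser" standing in for a real GitHub user name):

genrepos someuser | while read -r repo; do
    # clone each URL into /opt/repos/<repository-name>
    git clone "$repo" "/opt/repos/$(basename "$repo" .git)"
done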

Go to Full Article
Andy Carlson