Linux Journal

Supporting the NDS32 Architecture

1 month 2 weeks ago
by Zack Brown

Green Hu posted a patch to support the NDS32 architecture. He described the current status as, "It is able to boot to shell and passes most LTP-2017 testsuites in nds32 AE3XX platform."

Arnd Bergmann approved the patch, but Linus Torvalds wanted a little more of a description—an overview of the "uses, quirks, reasons for existing" for this chip, to include in the changelog.

Arnd replied:

The non-marketing description is that this is a fairly conventional (in a good way) low-end RISC architecture that is usually integrated into custom microcontroller and SoC designs, competing with the similar ARM32, ARC, MIPS32, RISC-V, Xtensa and (currently under review) C-Sky architectures that occupy the same space. The most interesting bit from my perspective is that Andestech are already selling a new generation of CPU cores that are based on 32-bit and 64-bit RISC-V, but are still supporting enough customers on the existing cores to invest in both.

And Green also said:

Andes nds32 architecture supports Linux for Andes's N10, D10, N13, N15, D15 processor cores.

Based on the patented 16/32-bit AndeStar RISC-like architecture, we designed the configurable AndesCore series of embedded processor families. AndesCores range from highly performance-efficient small-footprint cores for microcontrollers and deeply-embedded applications to 1GHz+ cores running Linux, covering general-purpose N-series cores for a wide range of computing needs; DSP-capable D-series cores for digital signal control; instruction-extensible E-series cores for application-specific acceleration; and secure S-series cores for best protection of the most valuable.

Our customers together have shipped over 2.5 billion SoCs with Andes processors embedded (including non-MMU IP cores). It will help our customers to get better Linux support if we are merged into mainline.

It looks like there's no controversy over this port, and it should fly into the main tree. One reason for the easy adoption is that it doesn't touch any other part of the kernel—if the patch breaks anything, it'll break only that one architecture, so there's very little risk in letting Green make his own choices about what to include and what to leave out. Linus's main threshold will probably be, does it compile? If yes, then it's okay to go in.


Mozilla's Firefox Nightly Experiment Results, EFF's Back to School Tips, HHVM 3.28 Released, Oracle Solaris 11.4 Now Available and Dropbox Vulnerability Discovered

1 month 2 weeks ago

News briefs for August 29, 2018.

Mozilla posted the results of its planned Firefox Nightly experiment involving secure DNS via the DNS over HTTPS (DoH) protocol. The experiment focused on two questions: "Does the use of a cloud DNS service perform well enough to replace traditional DNS?" and "Does the use of a cloud DNS service create additional connection errors?" See the Mozilla Blog for details.

The EFF yesterday posted its Back to School Essentials for Security—great tips whether or not you're currently a student.

HHVM 3.28 was released yesterday. This new release of the open-source virtual machine for executing programs written in Hack and PHP "contains new language features, bugfixes, performance improvements, and improvements to the debugger and editor/IDE support."

Oracle Solaris 11.4 has been released. Scott Lynn, Director of Product Management, Oracle Linux and Oracle Solaris, writes, "There have been 175 development builds to get us to Oracle Solaris 11.4. We've tested Oracle Solaris 11.4 for more than 30 million machine hours. Over 50 customers have already put Oracle Solaris 11.4 into production and it already has more than 3000 applications certified to run on it. Oracle Solaris 11.4 is the first and, currently, the only operating system that has completed UNIX V7 certification."

A vulnerability in the cloud storage service Dropbox was discovered recently. According to Appuals, this DLL hijacking and code execution vulnerability affects Dropbox version 54.5.90, and "a user whose device is undergoing this exploit won't realize it until the process has been exploited to inject malware into the system. The DLL injection and execution runs in the background without requiring any user input to run its arbitrary code."

Jill Franklin

Creating the Concentration Game PAIRS with Bash

1 month 2 weeks ago
by Dave Taylor

Exploring the nuances of writing a pair-matching memory game and one-dimensional arrays in Bash.

I've always been a fan of Rudyard Kipling. He wrote some great novels and stories, mostly about British colonial-era India. Politically correct in our modern times? Not so much, but still, his books are good fun for readers and are still considered great literature of their time. His works include The Jungle Book, Captains Courageous, The Just So Stories and The Man Who Would Be King, among many others.

He also wrote a great spy novel about a young English boy who is raised as an Indian native and thence recruited by the British government as a spy. The boy's name is the title of the book: Kim. In the story, Kim is trained to have an eidetic memory with a memory game that involves being shown a tray of stones of various shapes, sizes and colors. Then it's hidden, and he has to recite as many patterns as he can recall.

For some reason, that scene has always stuck with me, and I've even tried to teach my children to be situationally aware through similar games like "Close your eyes. Now, what color was the car that just passed us?" Since most of us are terrible observers (see, for example, how conflicting eyewitness accident reports can be), it's undoubtedly good practice for general observations about life.

Although it's tempting to try to duplicate this memory game as a program, the reality is that it would be difficult with just a shell script. Perhaps you could display a random pattern of letters and digits in a grid, clear the screen, then ask the user to enter the patterns they remember, but that's really much more of a job for a screen-oriented, graphical application—not a shell script.

But, there's a simplified version of this that you can play with a deck of cards: Concentration. You've probably played it yourself at some point in your life. You place the cards face down in a grid and then flip up two at a time to try to find pairs. At the beginning, it's just random guessing, but as the game proceeds, it becomes more about your spatial memory, and by the end, good players know what just about every unflipped card is at the beginning of their turn.

Designing PAIRS

That, of course, you can duplicate as a shell script, and since it is going to be a shell script, you also can make the number of pairs variable. Let's call this game PAIRS.

At a minimum, let's go with four pairs, which should make debugging easy. Since there's no real benefit to duplicating playing-card values, it's just as easy to use letters, which means a max of 26 pairs, or 52 slots. Not every value is going to produce a proper spread or grid, but if you aim for 13 per line, players then can play with anywhere from one to four lines of possibilities.
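Although the full script comes later, the core data structure is easy to sketch: a one-dimensional Bash array holding two copies of each letter, shuffled in place. (The variable names here are my own, not necessarily those in the finished game.)

```shell
#!/bin/bash
# Build the PAIRS board: each of the first $pairs letters appears twice,
# then the one-dimensional array is shuffled in place.

pairs=4                          # number of pairs to play with; max 26
letters=( {A..Z} )               # candidate card faces
board=()

# add two copies of each of the first $pairs letters
for (( i = 0; i < pairs; i++ )); do
  board+=( "${letters[i]}" "${letters[i]}" )
done

# Fisher-Yates shuffle of the one-dimensional array
for (( i = ${#board[@]} - 1; i > 0; i-- )); do
  j=$(( RANDOM % (i + 1) ))
  tmp=${board[i]} ; board[i]=${board[j]} ; board[j]=$tmp
done

echo "${board[@]}"               # the eight cards, in random order
```

From there, the game loop is a matter of mapping a player's guess (row and column) back to an index in that single array.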


3D-Printed Firearms Are Blowing Up

1 month 3 weeks ago
by Kyle Rankin

What's the practical risk with 3D-printed firearms today? In this opinion piece, Kyle explores the current state of the art.

If you follow 3D printing at all, and even if you don't, you've likely seen some of the recent controversy surrounding Defense Distributed and its 3D-printed firearm designs. If you haven't, here's a brief summary: Defense Distributed has created 3D firearm models and initially published them for free on its DEFCAD website a number of years ago. Some of those 3D models were designed to be printed with a traditional home hobbyist 3D printer (at least in theory), and other designs were for Defense Distributed's "Ghost Gunner"—a computer-controlled CNC mill aimed at milling firearm parts out of metal stock. The controversy that ensued was tied up in the general public debate about firearms, but in particular, a few models got the most attention: a model of an AR-15 lower receiver (the part of the rifle that carries the serial number) and "the Liberator", which was a fully 3D-printed handgun designed to fire a single bullet. The end result was that the DEFCAD site was forced to go offline (but as with all website take-downs, it was mirrored a million times first), and Defense Distributed has since been fighting the order in court.

The political issues raised in this debate are complicated, controversial and have very little to do with Linux outside the "information wants to be free" ethos in the community, so I leave those debates for the many other articles on this issue that already have been published. Instead, in this article, I want to use my background as a hobbyist 3D printer and combine it with my background in security to build a basic risk assessment that cuts through a lot of the hype and political arguments on all sides. I want to consider the real, practical risks with the 3D models and the current Ghost Gunner CNC mill that Defense Distributed provides today. I focus my risk assessment on three main items: the 3D-printed AR-15 lower receiver, the Liberator 3D-printed handgun and the Ghost Gunner CNC mill.

3D-Printed AR-15 Lower Receiver

This 3D model was one of the first items Defense Distributed shared on DEFCAD. In case you aren't familiar with the AR-15, its modular design is one of the reasons for its popularity. Essentially every major part of the rifle has numerous choices available that are designed to integrate with the rest of the rifle, and you can find almost all of the parts you need to assemble this rifle online, order them independently, and then build your own—that is, except for the lower receiver. That part of the rifle is what the federal government considers "the rifle", as it is the part that's stamped with the serial number that uniquely identifies and registers one particular rifle versus all of the others out there in the world. This part has restrictions like you would find with a regular rifle, revolver or other firearm.


Kali Linux's New Version 2018.3, Open-Source License War, Lenovo Announces Five New Android Tablets, Google Releases Open-Source Reinforcement Learning Framework and KD Chart Update

1 month 3 weeks ago

News briefs for August 28, 2018.

Kali Linux recently announced its third release of 2018. Version 2018.3 features several new tools: idb, an iOS research/penetration-testing tool; gdb-peda, Python Exploit Development Assistance for GDB; datasploit, OSINT Framework to perform various recon techniques; and kerberoast, Kerberos assessment tools. See the Change Log for more information on all the changes, and download Kali from here.

A new open-source license war has begun. According to ZDNet, Redis Labs has added the Commons Clause to its license for Redis, the open-source, in-memory data structure store that "enables real-time applications such as advertising, gaming, financial services, and IoT to work at speed". This license "forbids you from selling the software. It also states you may not host or offer consulting or support services as 'a product or service whose value derives, entirely or substantially, from the functionality of the software'".

Lenovo has released a new generation of Android tablets for home and entertainment use: "the Lenovo Tab E7, Lenovo Tab E8, Lenovo Tab E10, as well as new mainstream and premium tablets, the Lenovo Tab M10 and Lenovo Tab P10". See the press release for more details on these affordable, thin and light tablets.

Google released an open-source reinforcement learning framework based on TensorFlow for training AI models. It's available on GitHub. Venture Beat quotes Pablo Samuel Castro and Marc G. Bellemare, researchers on the Google Brain Team, on the platform: "Inspired by one of the main components in reward-motivated behavior in the brain and reflecting the strong historical connection between neuroscience and reinforcement learning research, this platform aims to enable the kind of speculative research that can drive radical discoveries."

KD Chart has a new release. The latest release of this open-source Qt component for creating business charts builds with modern Qt versions (up to Qt 5.10), improves tooltip handling and now "includes Stock Charts, Box & Whisker Charts and the KD Gantt module for implementing ODF Gantt charts into applications". You can get it from here.

Jill Franklin

Cleaning Your Inbox with Mutt

1 month 3 weeks ago
by Kyle Rankin

Teach Mutt yet another trick: how to filter messages in your Inbox with a simple macro.

I'm a longtime Mutt user and have written about it a number of times in Linux Journal. Although many people may think it's strange to be using a command-line-based email client in 2018, I find a keyboard-driven email client so much more efficient than clicking around in a web browser. Mutt is extremely customizable, which presents a steep learning curve at first, but now that I'm a few decades in, my Mutt configuration is pretty ideal and fits me like a tailored suit.

Of course, as with any powerful and configurable tool, every now and then I learn of a new Mutt feature that improves my quality of life dramatically. In this case, I was using an email system that didn't offer server-side filters. Because I was a member of many different email groups and aliases, my Inbox was flooded with emails of all kinds, and it became difficult to separate the unimportant email I wanted to archive from the emails that demanded my immediate attention.

There are many ways to solve this problem, some of which involve tools like offlineimap combined with filtering tools. With email clients like Thunderbird, you also can set up filters that automatically move email to other folders every time you sync. I wanted a similar system with Mutt, except I didn't want it to happen automatically. I wanted to be able to press a key first so I could confirm what was moving. In the process of figuring this out, I discovered a few gotchas I think other Mutt users will want to know about if they set up a similar system.

Tagging Emails

The traditional first step when setting up a keyboard macro to move email messages based on a pattern would be to use Mutt's tagging-by-pattern feature (by default, the T key) to tag all the messages in a folder that match a certain pattern. For instance, if all of your cron emails have "Cron Daemon" in the subject line, you would type the following key sequence to tag all of those messages:

TCron Daemon

That's the uppercase T, followed by the pattern I want to match in the subject line (Cron Daemon) and then the Enter key. If I type that while I'm in my Mutt index window that shows me all the emails in my Inbox, it will tag all of the messages that match that pattern, but it won't do anything with them yet. To act on all of those messages, I press the ; key (by default), followed by the action I want to perform. So to save all of the tagged email to my "cron" folder, I would type:
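That tag-then-save sequence is exactly what a macro can automate. As a rough sketch of where this is headed—with the caveat that the key binding and folder name below are my own choices, not Mutt defaults—you could add something like this to your ~/.muttrc:

```
# Press Esc-c in the index to tag all mail with "Cron Daemon" in the
# subject and save it to the =cron folder (binding and folder are examples)
macro index <esc>c "T~s 'Cron Daemon'<enter>;s=cron<enter>" "Archive cron mail"
```

The `T` invokes tag-pattern with a `~s` (subject) pattern, and `;s` applies the save command to every tagged message at once.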


Everything You Need to Know about Linux Containers, Part II: Working with Linux Containers (LXC)

1 month 3 weeks ago
by Petros Koutoupis

Part I of this Deep Dive on containers introduces the idea of kernel control groups, or cgroups, and the way you can isolate, limit and monitor selected userspace applications. Here, I dive a bit deeper and focus on the next step of process isolation—that is, through containers, and more specifically, the Linux Containers (LXC) framework.

Containers are about as close to bare metal as you can get when running virtual machines. They impose very little to no overhead when hosting virtual instances. First introduced in 2008, LXC adopted much of its functionality from the Solaris Containers (or Solaris Zones) and FreeBSD jails that preceded it. Instead of creating a full-fledged virtual machine, LXC enables a virtual environment with its own process and network space. Using namespaces to enforce process isolation and leveraging the kernel's very own control groups (cgroups) functionality, the feature limits, accounts for and isolates CPU, memory, disk I/O and network usage of one or more processes. Think of this userspace framework as a very advanced form of chroot.

Note: LXC uses namespaces to enforce process isolation, alongside the kernel's very own cgroups to account for and limit CPU, memory, disk I/O and network usage across one or more processes.

But what exactly are containers? The short answer is that containers decouple software applications from the operating system, giving users a clean and minimal Linux environment while running everything else in one or more isolated "containers". The purpose of a container is to launch a limited set of applications or services (often referred to as microservices) and have them run within a self-contained sandboxed environment.

Note: the purpose of a container is to launch a limited set of applications or services and have them run within a self-contained sandboxed environment.

Figure 1. A Comparison of Applications Running in a Traditional Environment to Containers

This isolation prevents processes running within a given container from monitoring or affecting processes running in another container. Also, these containerized services do not influence or disturb the host machine. The idea of being able to consolidate many services scattered across multiple physical servers into one is one of the many reasons data centers have chosen to adopt the technology.
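To make this concrete, a typical LXC session looks something like the following. (The container name and the distribution/release choices are illustrative, and the commands require root plus the LXC userspace tools.)

```
sudo lxc-create -t download -n demo -- -d ubuntu -r bionic -a amd64
sudo lxc-start -n demo
sudo lxc-attach -n demo -- ps aux     # lists only the container's processes
sudo lxc-stop -n demo
sudo lxc-destroy -n demo
```

The `ps` output inside the container illustrates the namespace isolation described above: processes on the host, or in other containers, simply aren't visible.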

Container features include the following:


New Raspberry Pi PoE HAT, UBports Foundation Releases Ubuntu Touch OTA-4, OpenSSH 7.8 Now Available, KDE Enhancements and Seagate Media Server SQL Injection Vulnerabilities

1 month 3 weeks ago

News briefs for August 27, 2018.

Raspberry Pi Trading is offering a Power-over-Ethernet HAT board for the RPi 3 Model B+ for $20 that ships with a small fan. Linux Gizmos notes that the "802.3af-compliant 'Raspberry Pi PoE HAT' allows delivery of up to 15W over the RPi 3 B+'s USB-based GbE port without reducing the port's up to 300Mbps bandwidth." To purchase, visit here.

UBports Foundation has released Ubuntu Touch OTA-4. This release features Ubuntu 16.04 and includes many security fixes and stability improvements. UBports notes that "We believe that this is the 'official' starting point of the UBports project. From the point when Canonical dropped the project until today, the community has been playing 'catch up' in development, infrastructure, and community building. This release shows that the community is soundly based and capable of delivering."

OpenSSH 7.8 was released August 24, 2018, and is available from its mirrors at https://www.openssh.com.

KDE developers continue to enhance KDE. According to Phoronix, the latest usability and productivity improvements include a new Plasmoid that brings easy access to the screen layout switcher, the logout screen will now warn you when other users are still logged in, new thumbnails for AppImages and more.

Several SQL injection vulnerabilities were discovered in the Seagate Media Server. Evidently the public folder facility "can be abused by malicious attackers when they upload troublesome files and media to the folder in the cloud". See the Appuals post for more details about this exploit.

Jill Franklin

Intel Reworks Microcode Security Fix License after Backlash, Intel's FSP Binaries Also Re-licensed, Valve Releases Beta of Steam Play for Linux, Chromebooks Running Linux 3.14 or Older Won't Get Linux App Support and Windows 95 Now an App

1 month 3 weeks ago

News briefs for August 24, 2018.

Intel has now reworked the license for its microcode security fix after outcry from the community. The Register quotes Imad Sousou, corporate VP and general manager of Intel Open Source Technology Center, "We have simplified the Intel license to make it easier to distribute CPU microcode updates and posted the new version here. As an active member of the open source community, we continue to welcome all feedback and thank the community."

Intel also has re-licensed its FSP binaries, which are used by Coreboot, LinuxBoot and Facebook's Open Compute Project, so that they are under the same license as the CPU microcode files. According to the Phoronix post, "The short and unofficial summary of that license text is it allows for redistribution (and benchmarking, if so desired) of the binaries and the restrictions essentially come down to no reverse-engineering/disassembly of the binaries and respecting the copyright."

Valve announced this week that it's releasing a beta of a new and improved Steam Play version for Linux. The new version includes "a modified distribution of Wine, called Proton, to provide compatibility with Windows game titles." Other improvements: the DirectX 11 and 12 implementations are now based on Vulkan, full-screen and game controller support have been improved, and "Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support".

Linux app support will be available soon for many Chromebooks, but a post on the Chromium Gerrit indicates that devices running Linux 3.14 or older will not be included. See this BetaNews article for a full list of the Chromebooks that won't be able to run Linux apps.

Windows 95 is now an app you can run on Linux, macOS and Windows thanks to Slack developer Felix Rieseberg, who created the Electron app. See The Verge for more details. The source code and app installers are available on GitHub.

Jill Franklin

Organizing a Market for Applications

1 month 3 weeks ago
by Sriram Ramkrishna

The "Year of the Desktop" has been a perennial call to arms that's sunken into a joke that's way past its expiration date. We frequently talk about the "Year of the Desktop", but we don't really talk about how we would achieve that goal. What does the "Year of the Desktop" even look like?

What it comes down to is applications—rather, a market for applications. There is no market for applications because of a number of cultural artifacts that date back to when Free Software was just getting up on wobbly legs.

Today, what we have is a distribution-centric model. Software is distributed by an OSV (operating system vendor), and users get their software directly from there via whatever packaging mechanism that OSV supports. This model evolved because, in the early-to-mid 1990s, those OSVs existed to compile the kernel and userspace into a cohesive product. Packaging of applications came next as a convenience, saving users from having to compile their own applications, which always was a hit-or-miss endeavor, as developers had different development environments from their users. Ultimately, OSVs enjoyed being gatekeepers as part of keeping developers honest and fixing issues that were unique to their operating system. OSVs saw themselves as agents representing users to provide high-quality software, and there was a feeling that developers were not to be trusted—after all, nobody knows the state of an operating system better than its vendor.

However, this model represented a number of challenges to both commercial and open-source developers. For commercial developers, the problem became how to maximize their audience, as the "Linux" market consisted of a number of major OSVs and an uncountable number of smaller niche distributions. Commercial application developers would have to develop multiple versions of their application targeted at various major distributions for fear of missing out on a subset of users. Over time, commercial application developers settled on targeting Ubuntu or shipping a compressed tar file hosted on their website, and various distributions would pick up these tarballs and re-package them for their users. If you were an open-source developer with a large following, you had the side benefit of distributions picking up your work and packaging it automatically, but you faced the same dilemma.


Debian Withholding Intel Security Patches, Linus Torvalds on the XArray Pull Request, Red Hat Transitioning Its Container Registry, Akraino Edge Stack Moves to Execution Phase, openSUSE Tumbleweed Snapshots Released and digiKam 6.0.0 Beta 1 Now Available

1 month 3 weeks ago

News briefs for August 23, 2018.

Debian is withholding security patches for the latest Intel CPU design flaw due to licensing issues. The Register reports that the end-user license file Intel added to the archive "prohibits, among other things, users from using any portion of the software without agreeing to be legally bound by the terms of the license", and Debian is not having it. See also Bruce Perens' blog post on this issue.

Linus Torvalds ranted about the XArray pull request this week on the LKML saying, "For some unfathomable reason, you have based it on the libnvdimm tree. I don't understand at all why you did that. That libnvdimm tree didn't get merged, because it had complete garbage in the mm/ code. And yes, that buggy shit was what you based the radix tree code on. I seriously have no idea why you have based it on some unstable random tree in the first place."

Red Hat is transitioning its customers and product portfolio to a new container registry for Red Hat container images at registry.redhat.io. Red Hat notes that as it makes this transition, "the goal is to have a uniform experience for all of our registries that uses industry standard Open Authorization (OAuth)."

The Linux Foundation announced that its Akraino Edge Stack, "designed to improve the state of edge cloud infrastructure for enterprise edge, OTT edge, and carrier edge networks", is moving from formation to execution. The Akraino Edge Stack seed code will be released to the community this week at the Akraino Edge Stack Developer Summit.

Two openSUSE Tumbleweed snapshots were released this week. Changes include a move to kernel 4.18.0, KVM improvements, Mozilla Firefox 61.0.2 and many more fixes and updates.

digiKam 6.0.0 beta 1 was released recently. The next major version will include "full support of video files management working as photos"; "new tools to export to Pinterest, OneDrive and Box web-services"; "an integration of all import/export web-service tools in LightTable, Image editor and Showfoto"; and many more improvements.

Jill Franklin

Copy and Paste in Screen

1 month 3 weeks ago
by Kyle Rankin

Put the mouse down, and copy and paste inside a terminal with your keyboard using Screen.

Screen is a command-line tool that lets you set up multiple terminal windows within it, detach them and reattach them later, all without any graphical interface. The program has existed since before I started using Linux, but before writing a tech tip about it, I clearly need to address the fact that I'm still using Screen at all. I can already hear you ask, "Why not tmux?" Well, because every time someone tries to convince me to make the switch, it's usually for one of the following reasons:

  • Screen isn't getting updates: I've been happy with the current Screen feature set for more than a decade, so as long as distributions continue to package it, I don't feel like I need any newer version.
  • tmux key bindings are so much simpler: I climbed the Screen learning curve more than a decade ago, so to me, the Screen key bindings are second nature.
  • But you can do vertical and horizontal splits in tmux: you can do them in Screen too, and since I climbed that learning curve ages ago, navigating splits is part of my muscle memory, just like inside vim.

So now that those arguments are out of the way, I thought those of you still using Screen might find it useful to learn how to do copy and paste within Screen itself. Although it's true that you typically can use your mouse to highlight text and paste it, if you are a fan of staying on the home row like I am, you realize how much faster and more efficient it is if you can copy and paste from within Screen itself using the keyboard. In fact, I found that once I learned this method, I ended up using it multiple times every day.

Enter Copy Mode

The first step is to enter copy mode from within Screen. Press Ctrl-a-[ to enter copy mode. Once you're in this mode, you can use arrow keys or vi-style keybindings to navigate up and down your terminal window. This is handy if you are viewing a log or other data that has scrolled off the screen and you want to see it. Typically people who are familiar with copy mode just use it for scrolling and then press q to exit that mode, but once you are in copy mode, you also can move the cursor to an area you want to copy.

Copy Text

To copy text once in copy mode, move your cursor to where you want to start to copy and then press the space bar. This will start the text selection, and you'll see the cursor change so that it highlights the text as you then move the cursor to select everything you want to copy. Once you are done selecting text, press the space bar again, and it will be copied to Screen's copy buffer. Once the text is copied, Screen automatically exits copy mode.
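Putting it all together, the whole keyboard-driven flow looks like this (Ctrl-a ] is Screen's default paste binding):

```
Ctrl-a [     enter copy mode
h/j/k/l      (or arrow keys) move to the start of the text
Space        begin the selection
h/j/k/l      extend the highlight over the text you want
Space        copy the selection to Screen's buffer and leave copy mode
Ctrl-a ]     paste the buffer at the cursor, in this or any other window
```

Because the buffer is shared across all of Screen's windows, this also is a quick way to move text from one window to another without touching the mouse.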


Mozilla Announces Major Improvements to Its Hubs Social Mixed Reality Platform, Windmill Enterprise Joins The Linux Foundation, Cloud Foundry Survey Results, New Bodhi Linux Major Release and Red Hat Enterprise Linux 7.6 Beta Now Available

1 month 3 weeks ago

News briefs for August 22, 2018.

Mozilla announced major improvements in its open-source Hubs, "an experiment to bring Social Mixed Reality to the browser". You now are able to bring videos, images, documents and 3D models into Hubs just by pasting in a link. You can join a room in Hubs and get together with others in Mixed Reality using any VR device or your phone or PC. In addition, any content you upload is available only to others in the room and is encrypted and removed when no longer used. The code for Hubs is available on GitHub.

Windmill Enterprise announced it has joined The Linux Foundation to collaborate on EdgeX Foundry and LF Networking (LFN). As part of its work with these projects, "Windmill will incorporate open source, blockchain solutions that enable broader adoption of industrial IoT frameworks into the enterprise. In addition, Windmill will contribute enterprise-class mobile networking security solutions to the largest global open source innovation community." Windmill is also working with the FreeIPA project for identity management. You can learn more here.

According to a recent Cloud Foundry Foundation (CFF) survey, Java and JavaScript are the top enterprise languages. See ZDNet for more information on the survey results.

The Bodhi Team announced a new major release this morning, version 5.0.0. The announcement notes that the new version doesn't include a ton of changes, but instead "simply serves to bring a modern look and updated Ubuntu core (18.04) to the lightning fast desktop you have come to expect from Bodhi Linux."

Red Hat Enterprise Linux 7.6 beta is now available. According to the Red Hat blog, "Red Hat Enterprise Linux 7.6 beta adds new and enhanced capabilities emphasizing innovations in security and compliance features, management and automation, and Linux containers." See the Release Notes for more information.

News Mozilla VR The Linux Foundation IOT Blockchain Cloud Java JavaScript Programming Distributions Bodhi Red Hat Containers
Jill Franklin

New Intel Caching Feature Considered for Mainline

1 month 3 weeks ago
by Zack Brown

These days, Intel's name is Mud in various circles because of the Spectre/Meltdown CPU flaws and other similar hardware issues that continue to emerge. But a recent discussion between some Intel folks and the kernel folks was not related to those things. There was still some thrust-and-parry between kernel people and company people, but it had more to do with getting past marketing speak than with wrestling over what Intel is doing to fix its longstanding hardware flaws.

Reinette Chatre of Intel posted a patch for a new chip feature called Cache Allocation Technology (CAT), which "enables a user to specify the amount of cache space into which an application can fill". Among other things, Reinette offered the disclaimer, "The cache pseudo-locking approach relies on generation-specific behavior of processors. It may provide benefits on certain processor generations, but is not guaranteed to be supported in the future."
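For context, CAT allocations on mainline kernels are driven through the resctrl filesystem. The following is only an illustrative sketch, not part of the patch under discussion: it assumes an RDT/CAT-capable Intel CPU, root privileges, and the group name, cache-bit mask and PID are all hypothetical placeholders.

```shell
# Mount the resctrl interface (requires CAT-capable hardware and root).
sudo mount -t resctrl resctrl /sys/fs/resctrl

# Create an allocation group and restrict it to the low four ways
# of the L3 cache on cache domain 0 (the mask value "f" is illustrative).
sudo mkdir /sys/fs/resctrl/p1
echo "L3:0=f" | sudo tee /sys/fs/resctrl/p1/schemata

# Bind a process to the group by PID (1234 is a placeholder).
echo 1234 | sudo tee /sys/fs/resctrl/p1/tasks
```

Cache pseudo-locking, the feature in Reinette's patch, builds on top of this interface; the sketch above shows only the plain CAT partitioning that "remains architectural."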

Thomas Gleixner thought Intel's work looked very interesting and in general very useful, but he asked, "are you saying that the CAT mechanism might change radically in the future [that is, in future CPU chip designs] so that access to cached data in an allocated area which does not belong to the current executing context wont work anymore?"

Reinette replied, "Cache Pseudo-Locking is a model-specific feature so there may be some variation in if, or to what extent, current and future devices can support Cache Pseudo-Locking. CAT remains architectural."

Thomas replied, "that does NOT answer my question at all."

At this point, Gavin Hindman of Intel joined the discussion, saying:

Support in a current generation of a product line doesn't imply support in a future generation. Certainly we'll make every effort to carry support forward, and would adjust to any changes in CAT support, but we can't account for unforeseen future architectural changes that might block pseudo-locking use-cases on top of CAT.

And Thomas replied, "that's the real problem. We add something that gives us some form of isolation, but we don't know whether next generation CPUs will work. From a maintainability and usefulness POV that's not a really great prospect."

Elsewhere in a parallel part of the discussion, Thomas asked, "Are there real world use cases that actually can benefit from this [CAT feature] and what are those applications supposed to do once the feature breaks with future generations of processors?"

Reinette replied, "This feature is model-specific with a few platforms supporting it at this time. Only platforms known to support Cache Pseudo-Locking will expose its resctrl interface."

To which Thomas said, "you deliberately avoided to answer my question again."

Zack Brown

Haiku Release R1/beta1, Flatpak v1.0.0, SUSE Updates Its Kernel to Boost Performance on Azure, Debian Receives Mitigation Updates for L1TF Vulnerability

1 month 4 weeks ago

Any old-school BeOS fans in the audience? If so, the Haiku development team just announced the upcoming release of R1/beta1.

Flatpak, the software utility for package deployment in a sandbox environment, just cut release version 1.0.0. It comes with performance and stability improvements.

SUSE has had a long history with Microsoft, and it would seem that its relationship with the software giant continues with the Linux distribution's kernel updates to boost performance on Azure.

In more L1TF-related news, the Debian GNU/Linux 9 (Stretch) distribution just received mitigation updates for this recent and high-profile vulnerability.

News
Petros Koutoupis

Freespire 4.0, Mozilla Announces New Fellows, Flatpak 1.0, KDevelop 5.2.4 and Net Neutrality Update

1 month 4 weeks ago

News briefs for August 21, 2018.

Freespire 4.0 has been released. This release brings a migration of the Ubuntu 16.04 LTS codebase to the 18.04 LTS codebase, which adds many usability improvements and more hardware support. Other updates include intuitive dark mode, "night light", Geary 0.12, Chromium browser 68 and much more.

Mozilla announced its 2018–2019 Fellows in openness, science and tech policy today. These fellows "will spend the next 10 to 12 months creating a more secure, inclusive, and decentralized internet". In the past, Mozilla fellows "built secure platforms for LGBTQ individuals in the Middle East; leveraged open-source data and tools to bolster biomedical research across the African continent; and raised awareness about invasive online tracking." See the Mozilla blog for more information and the list of Fellows.

Flatpak 1.0 has been released, marking the first version in a new stable series. Distributions should update as soon as possible. See the GitHub release notes for all the fixes and new features, which include faster installation and updates, a new portal, the ability to mark applications as end-of-life, and much more. See also the Flatpak documentation for more information.

KDevelop released version 5.2.4 today. This release contains a few bug fixes and "should be a very simple transition for anyone using 5.2.x currently". You can download it from here.

Reuters reports that 22 states are asking the US appeals court to reinstate Net Neutrality rules. In addition, several internet companies, media and technology advocacy groups filed a separate challenge yesterday to overturn the FCC ruling.

News Freespire Mozilla KDE Net Neutrality KDevelop Flatpak
Jill Franklin

Everything You Need to Know about Linux Containers, Part I: Linux Control Groups and Process Isolation

1 month 4 weeks ago
by Petros Koutoupis

Everyone's heard the term, but what exactly are containers?

The software enabling this technology comes in many forms, with Docker as the most popular. The recent rise in popularity of container technology within the data center is a direct result of its portability and ability to isolate working environments, thus limiting its impact and overall footprint on the underlying computing system. To understand the technology completely, you first need to understand the many pieces that make it all possible.

Sidenote: people often ask about the difference between containers and virtual machines. Both have a specific purpose and place with very little overlap, and one doesn't obsolete the other. A container is meant to be a lightweight environment that you spin up to host one to a few isolated applications at bare-metal performance. You should opt for virtual machines when you want to host an entire operating system or ecosystem or maybe to run applications incompatible with the underlying environment.

Linux Control Groups

Truth be told, certain software applications in the wild may need to be controlled or limited—at least for the sake of stability and, to some degree, security. Far too often, a bug or just bad code can disrupt an entire machine and potentially cripple an entire ecosystem. Fortunately, a way exists to keep those same applications in check. Control groups (cgroups) is a kernel feature that limits, accounts for and isolates the CPU, memory, disk I/O and network usage of one or more processes.

Originally developed by Google, the cgroups technology eventually found its way into the Linux kernel mainline in version 2.6.24 (January 2008). A redesign of this technology, with the addition of kernfs (splitting out some of the sysfs logic), was merged into the 3.15 and 3.16 kernels.

The primary design goal for cgroups was to provide a unified interface to manage processes or whole operating-system-level virtualization, including Linux Containers, or LXC (a topic I plan to revisit in more detail in a follow-up article). The cgroups framework provides the following:

  • Resource limiting: a group can be configured not to exceed a specified memory limit or use more than the desired number of processors, or it can be limited to specific peripheral devices.
  • Prioritization: one or more groups may be configured to utilize fewer or more CPUs or disk I/O throughput.
  • Accounting: a group's resource usage is monitored and measured.
  • Control: groups of processes can be frozen or stopped and restarted.

A cgroup can consist of one or more processes that are all bound to the same set of limits. These groups also can be hierarchical, which means that a subgroup inherits the limits applied to its parent group.
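As a concrete illustration, here is a minimal cgroup v1 session that caps a group's memory. It assumes the memory controller is mounted at /sys/fs/cgroup/memory (as on most distributions of this era), and the group name demo is arbitrary:

```shell
# Create a cgroup under the memory controller hierarchy.
sudo mkdir /sys/fs/cgroup/memory/demo

# Cap the group at 256MB of RAM (the value is in bytes).
echo $((256 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

# Move the current shell into the group; children inherit membership,
# so anything launched from this shell is now subject to the limit.
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs
```

A subgroup created with mkdir inside demo would inherit these limits, mirroring the hierarchy described above.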

Petros Koutoupis

Trinity Desktop Environment New Release, New Read-Only File System Designed for Android Devices, CloudNative Conference Coming Up, Retro Arcade Games Coming to Polycade

1 month 4 weeks ago

The Trinity Desktop Environment (TDE) development team just announced the release of TDE R14.0.5. The TDE project began as a continuation of the K Desktop Environment (KDE) version 3.

The Huawei-developed EROFS is finding its way into the Linux 4.19 staging tree. EROFS is a read-only file system designed for Android devices.

Mark your calendars for September 12–13: the CloudNative, Docker, and K8s Summit will be hosted in Dallas, Texas this year. To learn more, visit the official conference website.

Tyler Bushnell, the son of Atari co-founder Nolan Bushnell, is working to bring back retro arcade games with Polycade. Polycade is an arcade machine that is smaller than a cabinet and can hang on a wall.

News
Petros Koutoupis

New Version of KStars, Google Launches Edge TPU and Cloud IoT Edge, Lower Saxony to Migrate from Linux to Windows, GCC 8.2 Now Available and VMware Announces VMworld 2018

2 months 3 weeks ago

News briefs for July 26, 2018.

A new version of KStars—the free, open-source, cross-platform astronomy software—was released today. Version 2.9.7 includes new features, such as improvements to the polar alignment assistant and support for Ekos Live, as well as stability fixes. See the release notes for all the changes.

Google yesterday announced two new products: Edge TPU, a new "ASIC chip designed to run TensorFlow Lite ML models at the edge", and Cloud IoT Edge, which is "a software stack that extends Google Cloud's powerful AI capability to gateways and connected devices". Google states that "By running on-device machine learning models, Cloud IoT Edge with Edge TPU provides significantly faster predictions for critical IoT applications than general-purpose IoT gateways—all while ensuring data privacy and confidentiality."

The state of Lower Saxony in Germany is set to migrate away from Linux and back to Windows, following Munich's similar decision, ZDNet reports. The state currently has 13,000 workstations running openSUSE that it plans to migrate to "a current version of Windows" because "many of its field workers and telephone support services already use Windows, so standardisation makes sense". It's unclear how many years this migration will take.

GCC 8.2 was released today. This release is a bug-fix release and contains "important fixes for regressions and serious bugs in GCC 8.1 with more than 99 bugs fixed since the previous release", according to Jakub Jelinek's release statement. You can download GCC 8.2 here.

VMware announces VMworld 2018, which will be held August 26–30 in Las Vegas. The theme for the conference is "Possible Begins with You", and the event will feature keynotes by industry leaders, user-driven panels, certification training and labs. Topics will include "Data Center and Cloud, Networking and Security, Digital Workspace, Leading Digital Transformation, and Next-Gen Trends including the Internet of Things, Network Functions Virtualization and DevOps". For more information and to register, go here.

News KDE Astronomy Science Google IOT Cloud Windows openSUSE GCC VMware
Jill Franklin

Progress with Your Image

2 months 3 weeks ago
by Kyle Rankin

Learn a few different ways to get a progress bar for your dd command.

The dd tool has been a critical component on the Linux (and UNIX) command line for ages. You know a command-line tool is important if it has only two letters, and dd is no exception. What I love about it in particular is that it truly embodies the sense of a powerful tool with no safety features, as described in Neal Stephenson's In the Beginning was the Command Line. The dd command does something simple: it takes input from one file and outputs it to another file, and since in UNIX "everything is a file", that means dd doesn't care if the output file is another file on your disk, a partition or even your active hard drive; it will happily overwrite it! Because of this, dd fits in that immortal category of sysadmin tools that I type out and then pause for five to ten seconds, examining the command, before I press Enter.

Unfortunately, dd has fallen out of favor lately, and some distributions even advise using tools like cp or a graphical tool to image drives. This is largely out of concern that dd doesn't wait for the disk to sync before it exits, so even if it thinks it's done writing, that doesn't mean all of the data is in the output file, particularly if the output goes over slow I/O, as in the case of USB flash storage. The other reason people have tended to use other imaging tools is that traditionally dd doesn't output any progress. You type the command, and then if the image is large, you just wait, wait and then wait some more, wondering if dd will ever complete.

But, it turns out that there are quite a few different ways to get progress output from dd, so I cover a few popular ones here, all based on the following dd command to image an ISO file to a disk:

$ sudo dd if=/some/file.iso of=/dev/sdX bs=1M

Option 1: Use pv

Like many command-line tools, dd can accept input from a pipe and output to a pipe. This means if you had a tool that could measure the data flowing over a pipe, you could sandwich it in between two different dd commands and get live progress output. The pv (pipe viewer) command-line tool is just such a tool, so one approach is to install pv using your distribution's packaging tool and then create a pv and dd sandwich:
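A sketch of that sandwich (assuming pv is installed; /dev/sdX is a placeholder for your target device):

```shell
# Read with one dd, measure with pv, write with the other dd.
# pv prints throughput and elapsed time to stderr while data flows.
sudo dd if=/some/file.iso bs=1M | pv | sudo dd of=/dev/sdX bs=1M
```

If you also tell pv the total size with -s (for example, pv -s "$(stat -c %s /some/file.iso)"), it can display a percentage and estimate the time remaining.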

Kyle Rankin