Linux Journal

Debian 10 "Buster" Coming Tomorrow, GRUB 2.04 Released, PineBook Pro Laptop Available for Pre-Order Soon, Raspberry Pi Sticker Give-Away and IPFire 2.23 Core Update 134 to Fix Security Issue

1 week 6 days ago

News briefs for July 5, 2019.

Debian 10 "Buster" is coming tomorrow. You can follow the live coverage of the release here or @debian on Twitter. You also can join a release party or celebrate online at the Debian Party Line.

GRUB 2.04 has been released. According to Phoronix, this version, which has been two years in the making, includes RISC-V architecture support, native UEFI Secure Boot support, support for the F2FS filesystem and more. You can download it from GNU Savannah.

The PineBook Pro laptop will be available for pre-order July 25, 2019. OMG! Ubuntu! reports that the $199 PineBook Pro will now include privacy switches to disable the internal Bluetooth and WiFi module, the webcam and the microphone at the hardware level. Go to Pine64.org for specs and more details.

RaspberryPi.org is giving away stickers. All you need to do is leave a comment on their site or tweet them @Raspberry_Pi, with the hashtag #GimmeRaspberryPiStickers by midnight (BST) Monday, July 8th. They have ten packs to give away and winners will be chosen at random.

IPFire 2.23 Core Update 134 was released this week. This release contains security fixes in the kernel for the "SACK Panic" attack and some other smaller fixes. SACK Panic refers to CVE-2019-11477 and CVE-2019-11478, which are DoS attacks against the kernel's TCP stack. The IPFire blog post notes that "The first one made it possible for a remote attacker to panic the kernel and a second one could trick the system into transmitting very small packets so that a data transfer would have used the whole bandwidth but filled mainly with packet overhead. The IPFire kernel is now based on Linux 4.14.129, which fixes this vulnerability and fixes various other bugs." Go here to download.

News Debian GRUB PineBook Pro Pine64 Laptops Raspberry Pi IPFire Security
Jill Franklin

Lessons in Vendor Lock-in: Google and Huawei

1 week 6 days ago
by Kyle Rankin

What happens when you're locked in to a vendor that's too big to fail, but is on the opposite end of a trade war?

The story of Google no longer giving Huawei access to Android updates is still developing, so by the time you read this, the situation may have changed. At the moment, Google has granted Huawei a 90-day window whereby it will have access to Android OS updates, the Google Play store and other Google-owned Android assets. After that point, due to trade negotiations between the US and China, Huawei no longer will have that access.

Whether or not this new policy between Google and Huawei is still in place when this article is published, this article isn't about trade policy or politics. Instead, I'm going to examine this as a new lesson in vendor lock-in that I don't think many have considered before: what happens when the vendor you rely on is forced by its government to stop you from being a customer?

Too Big to Fail

Vendor lock-in isn't new, but until the last decade or so, it generally was thought of by engineers as a bad thing. Companies would take advantage of the fact that you used one of their products that was legitimately good to get you to use the rest of their products, which may or may not have been as good as those from their competitors. People felt the pain of being stuck with inferior products and rebelled.

These days, a lot of engineers have entered the industry in a world where the new giants of lock-in are still growing and have only flexed their lock-in powers a bit. Many engineers shrug off worries about choosing a solution that requires you to use only products from one vendor, in particular if that vendor is a large enough company. There is an assumption that those companies are too big ever to fail, so why would it matter that you rely on them (as many companies in the cloud do) for every aspect of their technology stack?

Many people who justify lock-in with companies that are too big to fail point to all of the even more important companies using that vendor, which would have even bigger problems should that vendor suffer a major bug or outage, or go out of business. It would take so much effort to use cross-platform technologies, the thinking goes, when the risk of going all-in with a single vendor seems so small.

Huawei also probably figured (rightly) that Google and Android were too big to fail. Why worry about the risks of being beholden to a single vendor for your OS when that vendor was used by other large companies and would have even bigger problems if the vendor went away?

Go to Full Article
Kyle Rankin

Finishing Up the Bash Mail Merge Script

2 weeks ago
by Dave Taylor

Finally, I'm going to finish the mail merge script, just in time for Replicant Day.

Remember the mail merge script I started writing a while back? Yeah, that was quite some time ago. I got sidetracked with the Linux Journal Anniversary special issue (see my article "Back in the Day: UNIX, Minix and Linux"), and then I spun off on a completely different tangent for my last article ("Breaking Up Apache Log Files for Analysis"). I blame it on...

SQUIRREL!

Oh, sorry, back to topic here. I was developing a shell script that would let you specify a text document with embedded field names that could be substituted iteratively across a file containing lots of field values.

Each field was denoted by #fieldname#, and I identified two categories of fieldnames: fixed and dynamic. A fixed value might be #name#, which would come directly out of the data file, while a dynamic value could be #date#, which would be the current date.

More interesting, I also proposed calculated values, specifically #suggested#, which would be a value calculated based on #donation#, and #date#, which would be replaced by the current date. The super-fancy version would have a simple language where you could define the relationship between variables, but let's get real. Mail merge. It's just mail merge.

Reading and Assigning Values

It turns out that the additions needed for this script aren't too difficult. The basic data file has comma-separated field names, then subsequent lines have the values associated with those fields.
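For context, a data file in this format might look something like the following (the field names and rows here are invented for illustration; the real file can carry whatever fields your letter needs):

name,email,donation
Bob Smith,bob@example.com,100
Jan Jones,jan@example.com,50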

Here's that core code:

if [ $lines -eq 1 ] ; then      # field names
  # grab variable names
  declare -a varname=($f1 $f2 $f3 $f4 $f5 $f6 $f7)
else                            # process fields
  # grab values for this line (can contain spaces)
  declare -a value=("$f1" "$f2" "$f3" "$f4" "$f5" "$f6" "$f7")

The declare built-in turns out to be ideal for this, allowing you to create an array varname based on the contents of the first line, then keep replacing the values of the array value, so that varname[1] pairs with value[1], and so on.
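To see where those parallel arrays are headed, here's a minimal sketch of the substitution step. This is my illustration rather than the finished script; template.txt and letter.out are made-up filenames, and it assumes varname and value already hold the current record:

cp template.txt letter.out
for i in "${!varname[@]}" ; do
  # replace every #fieldname# marker with the matching value
  # (values containing / or & would need escaping in a real script)
  sed -i "s/#${varname[$i]}#/${value[$i]}/g" letter.out
done

Each pass rewrites the working copy in place, so after the loop every marker in the letter has been filled in for that record.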

To add the additional variables #date# and #suggested#, you simply can append them to the varname and value arrays. The first one is easy, but it did highlight a weakness in the original code that I had to fix by adding quotes as shown:

Go to Full Article
Dave Taylor

Linux Mint Announces MintBox 3, NVIDIA Open-Sources Its TensorRT Library, Ubuntu's Wallpaper Competition for Eoan Ermine Open for Submissions, Google Released Its Android Security Patch for July and Whonix 15 Now Available

2 weeks 1 day ago

News briefs for July 3, 2019.

The Linux Mint folks yesterday announced that they're working with Compulab again on the next MintBox mini, the most powerful MintBox ever. MintBox 3 will be based on Airtop 3. The release date has yet to be announced. The unfinalized specs are listed as: "1. Basic configuration: $1543 with a Core i5 (6 cores), 16 GB RAM, 256 GB EVO 970, Wi-Fi and FM-AT3 FACE Module. 2. High end: $2698 with Core i9, GTX 1660 Ti, 32 GB RAM, 1TB EVO 970, WiFi and FM-AT3 FACE Module."

NVIDIA yesterday announced it has open-sourced its TensorRT Library and associated plugins. From Phoronix: "Included via NVIDIA/TensorRT on GitHub are indeed sources to this C++ library though limited to the plug-ins and Caffe/ONNX parsers and sample code. Building the open-source TensorRT code still depends upon the proprietary CUDA as well as other common build dependencies. But nice at least seeing the TensorRT code more open now than previously."

Ubuntu announces its wallpaper competition for Eoan Ermine is now open for submissions. To enter, post your image to the thread here. The competition will close in early September. Go here for more information.

Google released its Android Security Patch for July 2019 this week for all supported Pixel devices. Softpedia News reports that the patch "address a total of 33 security vulnerabilities affecting Android devices, which were discovered in the Android system, framework, library, media framework, as well as Qualcomm components, including closed-source ones. The most critical flaw was discovered in Android's Media framework." See the Security Bulletin for details.

Whonix 15 has been released. This new version of the desktop OS designed for advanced security and privacy is based on Debian Buster and includes many major changes and new features. See the ChangeLog for details.

News Linux Mint MintBox NVIDIA Ubuntu Google Android Security Whonix
Jill Franklin

In the End Is the Command Line

2 weeks 1 day ago
by Doc Searls

Times have changed every character but one in Neal Stephenson's classic. That one is Linux.

I was wandering through Kepler's, the legendary bookstore, sometime late in 1999, when I spotted a thin volume with a hard-to-read title on the new book table. In the Beginning...Was the Command Line, the cover said.

The command line was new to me when I started writing for Linux Journal in 1996. I hadn't come from UNIX or from programming. My tech background was in ham radio and broadcast engineering, and nearly all my hacking was on RF hardware. It wasn't a joke when I said the only code I knew was Morse. But I was amazed at how useful and necessary the command line was, and I was thrilled to see Neal Stephenson was the author of that book. (Pro tip: you can tell the commercial worth of an author by the size of his or her name on the cover. If it's bigger than the title of the book, the writer's a big deal. Literally.)

So I bought it, and then I read it in one sitting. You can do the same. In fact, I command that you do, if you haven't already, because (IMHO) it's the most classic book ever written about both the command line and Linux itself—a two-fer of the first order.

And I say this in full knowledge (having re-read the whole thing many times, which is easy, because it's short) that much of what it brings up and dwells on is stale in the extreme. The MacOS and the Be operating systems are long gone (and the Be computer was kind of dead on arrival), along with the Windows of that time. Today Apple's OS X is BSD at its core, while Microsoft produces lots of open-source code and contributes mightily to The Linux Foundation. Some of Neal's observations and complaints about computing and the culture of the time also have faded in relevance, although some remain enduringly right-on. (If you want to read a section-by-section critique of the thing, Garrett Birkel produced one in the mid-2000s with Neal's permission. But do read the book first.)

What's great about Command Line is how well it explains the original virtues of UNIX, and of Linux as the operating system making the most of it:

The file systems of Unix machines all have the same general structure. On your flimsy operating systems, you can create directories (folders) and give them names like Frodo or My Stuff and put them pretty much anywhere you like. But under Unix the highest level—the root—of the filesystem is always designated with the single character "/" and it always contains the same set of top-level directories:

Go to Full Article
Doc Searls

Mageia 7 Now Available, NVIDIA Announces New "SUPER" Line with Revised Graphics Cards, Humble Store's DRM-Freedom Sale, Ubuntu MATE Coming to Raspberry Pi 4 and Backbox Linux Releases Version 6.0

2 weeks 2 days ago

News briefs for July 2, 2019.

Mageia 7 was released yesterday. This new version has "lots of new features, exciting updates, and new versions of your favorite programs, as well as support for very recent hardware." In addition, the Mageia team made an effort to enhance gaming in Mageia, so the game collection features many upgrades and additions. See the Release Notes and the full documentation for more information and installation options.

NVIDIA today announced its new "SUPER" line with revised RTX 2060 (starting at $399), RTX 2070 (starting at $499) and RTX 2080 (starting at $699) graphics cards, available later this month. See Phoronix for more details on the new GeForce RTX SUPER GPUs.

The Humble Store is having a DRM-Freedom sale. GamingOnLinux reports that all the included games have DRM-free builds available. Deals include Rogue Legacy at 80% off, Prison Architect at 75% off, Lovers in a Dangerous Spacetime at 60% off and much more.

Ubuntu MATE is coming to the Raspberry Pi 4. According to Forbes, lead developer Martin Wimpress revealed he is working on bringing MATE to the new RPi when he tweeted a photo of it with the caption "This should keep me occuPIed 4 a while". The Forbes post also notes that "While Raspbian 10 (based on Debian Buster) is a solid choice, Ubuntu MATE feels like a more modern desktop experience with less of a learning curve. It also includes office software, media players, the excellent Shotwell photo utility and the basic software most people need for general PC use."

Backbox Linux releases an update to version 6.0. Rogue Media News reports that the update of the ethical hacking and pen-testing distro includes a "new kernel, updated tools and some structural changes with a focus on maintaining stability and compatibility with Ubuntu 18.04 LTS". Go here to download the latest version.

News Mageia NVIDIA gaming Humble Store Ubuntu MATE Raspberry Pi Backbox Linux
Jill Franklin

Online Censorship Is Coming--Here's How to Stop It

2 weeks 2 days ago
by Glyn Moody

EU's upload filters are coming. Why and how the Open Source world must fight them.

A year ago, I warned about some terrible copyright legislation being drawn up in the EU that would have major adverse effects on the Open Source world. Its most problematic provision would force many for-profit sites operating in the EU to use algorithmic filters to block the upload of unauthorized material by users. As a result of an unprecedented campaign of misinformation, smears and outright lies, supporters managed to convince/trick enough Members of the European Parliament (MEPs) to vote in favour of the new Copyright Directive, including the deeply flawed upload filters.

A number of changes were made from the original proposals that I discussed last year. Most important, "open source software development and sharing platforms" are explicitly excluded from the scope of the requirement to filter uploads. However, it would be naïve to assume that the Copyright Directive is now acceptable, and that free software will be unaffected.

Open source and the open internet have a symbiotic relationship—each has fed constantly into the other. The upload filters are a direct attack on the open internet, turning it into a permissioned online space. They will create a censorship system that past experience shows is bound to be abused by companies and governments alike to block legitimate material. It would be a mistake of the highest order for the Open Source community to shrug its shoulders and say: "we're okay—not our problem." The upload filters are most definitely the problem of everyone who cares about the open and healthy internet, and about freedom of speech. For example, the GitHub blog points out that false positives are likely to be a problem when upload filters are implemented—regardless of nominal "exemptions" for open source: "When a filter catches a false positive and dependencies disappear, this not only breaks projects—it cuts into software developers' rights as copyright holders too."

So, what can be done?

As the Pirate MEP Julia Reda emphasises in her post summarizing the multi-year battle to improve the text of the Copyright Directive: "My message to all who took part in this movement: Be proud of how far we came together! We've proven that organised citizens can make an impact—even if we didn't manage to kill the whole bill in the end. So don't despair!" Specifically:

Go to Full Article
Glyn Moody

The Command-Line Issue

2 weeks 3 days ago
by Bryan Lunduke

Summer. 1980-something. An elementary-school-attending, Knight Rider-T-Shirt-wearing version of myself slowly rolls out of bed and shuffles to the living room. There, nestled between an imposingly large potted plant and an over-stocked knick-knack shelf, rested a beautifully gray, metallic case powered by an Intel 80286 processor—with a glorious, 16-color EGA monitor resting atop.

This was to be my primary resting place for the remainder of the day: in front of the family computer.

That PC had no graphical user interface to speak of—no X Window System, no Microsoft Windows, no Macintosh Finder. There was just a simple command line—in this case, MS-DOS. (This was long before Linux became a thing.) Every task I wished to perform—executing a game, moving files—required me to type the commands in via a satisfyingly loud, clicky keyboard. No, "required" isn't the right word here. Using the computer was a joy. "Allowed" is the right word. I was allowed to enjoy typing those commands in. I never once resented that my computer needed to be interacted with via a keyboard. That is, after all, what computers do. That's what they're for—you type in commands, and the computer executes them for you, often with a "beep".

For a kid, this was empowering—taking my rudimentary understanding of language (aided, at first, by a handy DOS command cheat sheet) and weaving together strings of words that commanded the computer to do my bidding. It was like organizing runes to enact an ancient spell. It was magic. And I was a wizard. Did I miss not being able to "double click" or "drag and drop"? Of course not. I'd seen some such mouse-driven user interfaces (like the early Macintoshes), but—from my vantage—that wasn't how computers really worked. I viewed such things as cool-looking, but not necessary. Computers use words. Powerful, magical words.

But this isn't 1980-something. In fact, it's barely 2010-something. (Did anyone else just realize that it's almost 2020?) For better or worse, how people use—and view—computers has changed dramatically since the days of Knight Rider. Modern operating systems are, often, belittled if they require users to interact with the machine via a command line. The graphical user interface is king. Which is, perhaps, the inevitable evolution of how we all interact with our computers.

Yet the value of the command line (or terminal, shell and so on) is still there. For many, it makes using computers more accessible. For others, it provides streamlined workflows that a mouse or touch-driven interface simply can't compete with. And, for others still, the blinking cursor provides a bit of nostalgic joy—or an aesthetically simple, and distraction free, environment.

This issue of Linux Journal celebrates the cursor—that wonderful blinking underscore and all the potential that it holds.

Go to Full Article
Bryan Lunduke

Linux Used More than Windows on Azure, Debian Asks for Help Testing Buster, Red Hat Announces Packit-as-a-Service, KaOS 2019.07 Released and Kernel 5.2-rc7 Is Out

2 weeks 3 days ago

News briefs for July 1, 2019.

Linux is now used more than Windows on Azure. According to ZDNet, Microsoft Linux kernel developer Sasha Levin revealed that "the Linux usage on our cloud has surpassed Windows" when requesting that Microsoft be allowed to join a Linux security developer list. The ZDNet piece concludes with "There are now at least eight Linux distros available on Azure. And that's not counting Microsoft's own Azure Sphere. This is a software and hardware stack designed to secure edge devices, which includes what Microsoft president Brad Smith declared 'a custom Linux kernel'. It's now a Linux world—even at Microsoft headquarters in Redmond, Washington."

Debian 10.0 "Buster" release images are available for testing. Phoronix reports that with the 10.0 release coming next weekend, the "near-final images" are uploaded and folks are encouraged to test them: "There is a call for 'smoke testing' of these Debian 10.0 images for AMD64 (x86_64), i386, MIPS, MIPSEL, MIPS64EL, PPC64EL, and s390x. The Debian Developers are aiming to ensure there are no release critical bugs. In particular they are looking for more testing of their live images on bare metal PCs in both BIOS (CSM) and UEFI boot modes." Read the "call for help" here.

Red Hat this morning announced the availability of Packit-as-a-Service, a GitHub app that uses the Packit project: "Using the Packit service in your upstream projects helps you continuously ensure that your projects work in Fedora OS. Just add one config file to your repository, along with the RPM spec file and you're almost there. We have started publishing docs for the service over here."

KaOS 2019.07 was released today. This rolling distro includes the latest packages for the Plasma Desktop (Frameworks 5.59.0, Plasma 5.16.2 and KDE Applications 19.04.2), all built on Qt 5.13.0. See the Download Page for installation instructions.

Linux 5.2-rc7 is out. Linus Torvalds writes (from "in the middle of nowhere on a boat"), "It's been _fairly_ calm. Would I have hoped for even calmer with my crappy internet? Sure. But hey, it's a lot smaller than rc6 was and I'm not really complaining. All small and fairly uninteresting. Arch updates, networking, core kernel, filesystems, misc drivers. Nothing stands out - just read the appended shortlog. It's small enough to be easy to just scroll through."

News Windows Microsoft Azure Debian Red Hat Packit-as-a-Service GitHub KaOS kernel KDE Desktop
Jill Franklin

Qt and LG Collaborating on webOS for Embedded Smart Devices, Valve to Continue Steam Gaming on Ubuntu, Qt Creator 4.10 Beta2 Released, The Official Raspberry Pi Beginner's Guide Updated for Raspberry Pi 4 and Opera 62 Now Available

2 weeks 6 days ago

News briefs for June 28, 2019.

Qt recently announced an expansion of its partnership with LG Electronics to collaborate on making open-source webOS the platform of choice for embedded smart devices. From the press release: "In order to meet and exceed challenging requirements and navigate the distinct market dynamics of the automotive, smart home and robotics industries, LG selected Qt as its business and technical partner for webOS. The most impactful technology trends of recent years, including AI, IoT and automation, require a new approach to the user experience (UX), and UX has been one of Qt's primary focus areas since the company's founding. Through the partnership, Qt will provide LG with the most powerful end-to-end, integrated and hardware-agnostic development environment for developers, engineers and designers to create innovative and immersive apps and devices. In addition, webOS will officially become a reference operating system of Qt."

Valve will continue Steam gaming on Ubuntu, now that Canonical announced it won't drop 32-bit software support in Ubuntu after all. ZDNet reports that "Ubuntu will no longer be called out as 'the best-supported path for desktop users.' Instead, Valve is re-thinking how it wants to approach distribution support going forward. There are several distributions on the market today that offer a great gaming desktop experience such as Arch Linux, Manjaro, Pop!_OS, Fedora, and many others."

Qt Creator 4.10 Beta2 was released today. The most notable fix in this version addresses a regression in the signing option for iOS devices. See the change log for all the bug fixes and new features, and go here to download the open-source version.

Raspberry Pi Press has released The Official Raspberry Pi Beginner's Guide, which has been fully updated for Raspberry Pi 4 and the latest version of the Raspbian OS (Buster). You can order a hard copy of the book here or get the free PDF here.

Opera 62 was released yesterday. Updates include an improved Dark Mode and support for the Windows Dark theme. It also adds an option to connect your browser history to Speed Dial, so you can quickly return to tasks you've started. See the full changelog for more details.

qt LG Electronics Embedded WebOS Valve Steam Ubuntu Canonical gaming Qt Creator Raspberry Pi Raspbian Opera
Jill Franklin

Without a GUI--How to Live Entirely in a Terminal

2 weeks 6 days ago
by Bryan Lunduke

Sure, it may be hard, but it is possible to give up graphical interfaces entirely—even in 2019.

About three years back, I attempted to live entirely on the command line for 30 days—no graphical interface, no X Server, just a big-old terminal and me, for a month.

I lasted all of ten days.

Why did I attempt this? What on Earth would compel a man to give up all the trappings and features of modern graphical desktops and, instead, artificially restrict himself to using nothing but text-based, command-line software, as if he were stuck in the early 1980s?

Who knows. Clearly, I make questionable decisions.

But you know, if I'm being honest, the experience was not entirely unpleasant. Sure, I missed certain niceties from the graphical side of things, but there were some distinct benefits to living in a shell. My computers, even the low-powered ones, felt faster (command-line software tends to be a whole lot lighter and leaner than those with a graphical user interface). Plus, I was able to focus and get more work done without all the distractions of a graphical desktop, which wasn't bad.

What follows are the applications I found myself relying upon the most during those fateful ten days, separated into categories. In some cases, these are applications I currently use over (or in addition to) their graphical equivalents.

Quite honestly, it is entirely possible to live completely without a GUI (more or less)—even today, in 2019. And, these applications make it possible—challenging, but possible.

Web Browsing

Plenty of command-line web browsers exist. The classic Lynx typically comes to mind, as does ELinks. Both are capable of browsing basic HTML websites just fine. In fact, the experience of doing so is rather enjoyable. Sure, most websites don't load properly in the "everything is a dynamically loading, JavaScript thingamadoodle" future we live in, but the ones that do load, load fast, and free of distractions, which makes reading them downright enjoyable.

But for me, personally, I recommend w3m.

Figure 1. Browsing Wikipedia with Inline Images Using w3m

w3m supports inline images (by installing the w3m-img package)—seriously, a web browser with image support, inside the terminal. The future is now.

It also makes filling out web forms easy—well, maybe not easy, but at least doable—by opening a configured text editor (such as nano or vim) for entering form text. It feels a little weird the first time you do it, but it's surprisingly intuitive.
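If you want to try it, the setup is just a package install and a URL. Here's a quick sketch; the package names below are the Debian/Ubuntu ones and may differ on your distro, and inline images need a terminal w3m-img can draw into, such as xterm or the Linux framebuffer console:

sudo apt install w3m w3m-img    # w3m-img provides the inline-image support
w3m https://en.wikipedia.org/wiki/Linux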

Go to Full Article
Bryan Lunduke

Nextcloud Has a New Collaborative Rich Text Editor Called Nextcloud Text, GNOME Announces GNOME Usage, Linus Torvalds Warns of Future Hardware Issues, Red Hat Introduces Red Hat Insights and Offensive Security Launches OffSec Flex

3 weeks ago

News briefs for June 27, 2019.

Nextcloud announces a new collaborative rich text editor called Nextcloud Text. Nextcloud Text is described as not "a replacement to a full office suite, but rather a distraction-free, focused way of writing rich-text documents alone or together with others." See the Nextcloud blog post for more details.

GNOME announces GNOME Usage, a new app for visualizing system resources. The app was developed by Petr Stetka, a high-school intern in GNOME's Red Hat office in Brno. From the announcement: "Usage is powered by libgtop, the same library used by GNOME System Monitor. One is not a replacement for the other, they complement our user experience by offering two different use cases: Usage is for the everyday user that wants to check which application is eating their resources, and System Monitor is for the expert that knows a bit of operating system internals and wants more technical information being displayed." See the GNOME Wiki for more information on GNOME Usage.

Linus Torvalds this week warned attendees at KubeCon + CloudNativeCon + Open Source Summit China that managing software will become more challenging, due to two hardware issues that are beyond DevOps teams' control. According to DevOps.com, the first issue is "the steady stream of patches being generated as new cybersecurity issues related to the speculative execution model that Intel and other processor vendors rely on to accelerate performance." And the second future hardware challenge is "as processor vendors approach the limits of Moore's Law, many developers will need to reoptimize their code to continue achieving increased performance. In many cases, that requirement will be a shock to many development teams that have counted on those performance improvements to make up for inefficient coding processes".

Red Hat introduces Red Hat Insights. Red Hat Insights is now included with Red Hat Enterprise Linux subscriptions, and it's described as "a Software-as-a-Service (SaaS) product that provides continuous, in-depth analysis of registered Red Hat-based systems to proactively identify threats to availability, security, performance and stability across physical, virtual and cloud environments. Insights works off of an intelligent rules engine, comparing system configuration information to rules in order to identify issues, often before a problem occurs." See the Red Hat Insights Get Started Page for more information.

Offensive Security launches OffSec Flex, "a new program for enterprises to simplify the cybersecurity training process and allow organizations to invest more in cyber security skills development". Some of its training courses and certifications include the Penetration Testing with Kali Linux (PWK) course and the Offensive Security Certified Professional (OSCP) certification, along with the Advanced Web Attacks and Exploitation (AWAE) course and the Offensive Security Web Expert (OSWE) certification. Go here to learn more.

News Nextcloud GNOME Linus Torvalds DevOps Red Hat Security OffSec Flex
Jill Franklin

FreeDOS's Linux Roots

3 weeks ago
by Jim Hall

On June 29, 2019, the FreeDOS Project turns 25 years old. That's a major milestone for any open-source software project! In honor of this anniversary, Jim Hall shares this look at how FreeDOS got started and describes its Linux roots.

The Origins of FreeDOS

I've been involved with computers from an early age. In the late 1970s, my family bought an Apple II computer. It was here that I taught myself how to write programs in AppleSoft BASIC. These were not always simple programs. I quickly advanced from writing trivial "math quiz" programs to more complex "Dungeons and Dragons"-style adventure games, complete with graphics.

In the early 1980s, my parents replaced the Apple with an IBM Personal Computer running MS-DOS. Compared to the Apple, the PC had a much more powerful command line. You could connect simple utilities and commands to do more complex functions. I fell in love with DOS.

Throughout the 1980s and into the early 1990s, I considered myself a DOS "power user". I taught myself how to write programs in C and created new DOS command-line utilities that enhanced my MS-DOS experience. Some of my custom utilities merely reproduced the MS-DOS command line with a few extra features. Other programs added new functionality to my command-line experience.

I discovered Linux in 1993 and instantly recognized it as a Big Deal. Linux had a command line that was much more powerful than MS-DOS, and you could view the source code to study the Linux commands, fix bugs and add new features. I installed Linux on my computer, in a dual-boot configuration with MS-DOS. Since Linux didn't have the applications I needed as a working college student (a word processor to write class papers or a spreadsheet program to do physics lab analysis), I booted into MS-DOS to do much of my classwork and into Linux to do other things. I was moving to Linux, but I still relied on MS-DOS.

In 1994, I read articles in technology magazines saying that Microsoft planned to do away with MS-DOS soon. The next version of Windows would not use DOS. MS-DOS was on the way out. I'd already tried Windows 3, and I wasn't impressed. Windows was not great. And, running Windows would mean replacing the DOS applications that I used every day. I wanted to keep using DOS. I decided that the only way to keep DOS was to write my own. On June 29, 1994, I announced my plans on the Usenet discussion group comp.os.msdos.apps, and things took off from there:

ANNOUNCEMENT OF PD-DOS PROJECT:

A few months ago, I posted articles relating to starting a public domain version of DOS. The general support for this at the time was strong, and many people agreed with the statement, "start writing!" So, I have...

Go to Full Article
Jim Hall

people.kernel.org Has Launched, GitLab 12.0 Released, TheoTown Now on Steam for Linux, Pulseway Introduces New File Transfer Feature, and SUSE Manager 4 and SUSE Manager for Retail 4 Are Now Available

3 weeks 1 day ago

News briefs for June 26, 2019.

Konstantin Ryabitsev yesterday announced the launch of people.kernel.org to replace Google+ for kernel developers. people.kernel.org is "an ActivityPub-enabled federated platform powered by WriteFreely and hosted by very nice and accommodating folks at write.as." Initially the service is being rolled out to those listed in the kernel's MAINTAINERS file. See the about page for more information.

GitLab 12.0 was released yesterday. From the announcement: "GitLab 12.0 marks a key step in our journey to create an inclusive approach to DevSecOps, empowering "everyone to contribute". For the past year, we've been on an amazing journey, collaborating and creating a solution that brings teams together. There have been thousands of community contributions making GitLab more lovable. We believe everyone can contribute, and we've enabled cross-team collaboration, faster delivery of great code, and bringing together Dev, Ops, and Security."

TheoTown, the retro-themed city-building game, is now available on Steam for Linux. GamingOnLinux reports that "On Android at least, the game is very highly rated and I imagine a number of readers have played it there so now you can pick it up again on your Linux PC and continue building the city of your dreams. So far, the Steam user reviews are also giving it a good overall picture." You can find TheoTown on Steam.

Pulseway introduces its new File Transfer feature to the Pulseway Remote Desktop app. With File Transfer, "businesses can now send and receive files from both the source and destination endpoint". Go here for more details on Pulseway's File Transfer capabilities.

SUSE Manager 4 and SUSE Manager for Retail 4 are now available. The press release notes that these open-source infrastructure management solutions "help enterprise DevOps and IT operations teams reduce complexity and regain control of IT assets no matter where they are, increase efficiency while meeting security policies, and optimize operations via automation to reduce costs". Go here to learn more about SUSE Manager and here for more information on SUSE Manager for Retail.

News kernel people.kernel.org GitLab gaming Steam Pulseway Desktop SUSE DevOps
Jill Franklin

Ten Years of "Linux in the GNU/South": an Overview of SELF 2019

3 weeks 1 day ago
by Matthew R. Higgins

Highlights of the 2019 Southeast LinuxFest.

The tenth annual SouthEast LinuxFest (SELF) was held on the weekend of June 14–16 at the Sheraton Charlotte Airport Hotel in Charlotte, North Carolina. Still running strong, SELF serves partially as a replacement for the Atlanta Linux Showcase, a former conference for all things Linux in the southeastern United States. Since 2009, the conference has provided a venue for those living in the southeastern United States to come and listen to talks by speakers who all share a passion for using Linux-based operating systems and free and open-source software (FOSS). Although some of my praises of the conference are not exclusive to SELF, the presence of such a conference in the "GNU/South" has the long-term potential to have a significant effect on the Linux and FOSS community.

Despite several challenges along the way, SELF's current success is the result of what is now ten years of hard work by the conference organizers, currently led by Jeremy Sands, one of the founding members of the conference. Scanning through the materials for SELF 2019, however, I found no mention that this year's conference marked a decade of "Linux in the GNU/South". It actually wasn't until the conference was already over that I realized this marked SELF's decennial anniversary. I initially asked myself why this wasn't front and center on event advertisements, but looking back on SELF, neglecting questions such as "how long have we been going?" and instead focusing on "what is going on now?" and "where do we go from here?" speaks to the admirable spirit and focus of the conference and its attendees. This focus on the content of SELF rather than SELF itself shows a true passion for the Linux community rather than for any particular organization or institution that benefits from the community.

Another element worthy of praise is SELF's "all are welcome" atmosphere. Whether attendees arrived excited to return to an event they had waited 362 days for, or apprehensive as they stepped down the L-shaped hall of conference rooms for the first time, it took little time for the contagious, positive energy to take effect. People of all ages and skill levels could be seen intermingling and enthusiastically inviting anybody who was willing into their conversations and activities. The conference talks, which took all kinds of approaches to thinking about and using Linux, proved that everybody is welcome to attend and participate in the event.

Go to Full Article
Matthew R. Higgins

Canonical to Continue Building Selected 32-Bit i386 Packages for Ubuntu 19.10, Azul Systems Announces Zulu Mission Control v7.0, Elisa v. 0.4.1 Now Available, Firefox Adds Fission to the Nightly Build and Tails Emergency Release

3 weeks 2 days ago

News briefs for June 25, 2019.

After much feedback from the community, Canonical yesterday announced it will continue to build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS. The statement notes that Canonical "will also work with the WINE, Ubuntu Studio and gaming communities to use container technology to address the ultimate end of life of 32-bit libraries; it should stay possible to run old applications on newer versions of Ubuntu. Snaps and LXD enable us both to have complete 32-bit environments, and bundled libraries, to solve these issues in the long term."

Azul Systems announces Zulu Mission Control v7.0. From the press release: "Based on the OpenJDK Mission Control project, Zulu Mission Control is a powerful Java performance management and application profiling tool that works with Azul's Zing and Zulu JDKs/JVMs and supports both Java SE 8 and 11. Zulu Mission Control is free to use, and may be downloaded from www.azul.com/products/zulu-mission-control."

Version 0.4.1 of KDE's Elisa music player is now available. Some fixes with this release include improved accessibility, improved focus handling and an improved build system. You can get the source code tarball here.

Firefox recently added Fission to its latest nightly build. Softpedia News quotes developer Nika Layzell on the new site isolation feature: "We aim to build a browser which isn't just secure against known security vulnerabilities, but also has layers of built-in defense against potential future vulnerabilities. To accomplish this, we need to revamp the architecture of Firefox and support full Site Isolation. We call this next step in the evolution of Firefox's process model 'Project Fission'. While Electrolysis split our browser into Content and Chrome, with Fission, we will "split the atom", splitting cross-site iframes into different processes than their parent frame."

Tails announced an emergency release this week, 3.14.2, to address a critical security vulnerability in the Tor browser. Be sure to update the Tor Browser to version 8.5.3 to fix the sandbox escape vulnerability. Go here to download.

News Canonical Ubuntu Zulu Mission Control Azul Java Elisa Firefox Security Fission Tails
Jill Franklin

Deprecating a.out Binaries

3 weeks 2 days ago
by Zack Brown

Remember a.out binaries? They were the standard executable format on Linux until around 1995, when ELF took over. ELF is better. It allows you to load shared libraries anywhere in memory, while a.out binaries require you to register shared library locations in advance. That's fine at small scales, but it becomes more and more of a headache as you have more and more shared libraries to deal with. But a.out is still supported in the Linux source tree, 25 years after ELF became the default format.
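If you're curious whether your own system still carries the legacy support, one quick way to look is to check the kernel configuration and to ask file what format a binary uses. This is just a sketch; the config-file path below is the common distro location and may differ on your machine:

grep -E 'BINFMT_(AOUT|ELF)' /boot/config-$(uname -r)   # a.out support is CONFIG_BINFMT_AOUT
file /bin/ls                                           # any modern binary will report ELF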

Recently, Borislav Petkov recommended deprecating it in the source tree, with the idea of removing it if it turned out there were no remaining users. He posted a patch to implement the deprecation. Alan Cox also remarked that "in the unlikely event that someone actually has an a.out binary they can't live with, they can also just write an a.out loader as an ELF program entirely in userspace."

Richard Weinberger had no problem deprecating a.out and gave his official approval of Borislav's patch.

In fact, there's a reason the issue happens to be coming up now, 25 years after the fact. Linus Torvalds pointed out:

I'd prefer to try to deprecate a.out core dumping first....That's the part that is actually broken, no?

In fact, I'd be happy to deprecate a.out entirely, but if somebody _does_ complain, I'd like to be able to bring it back without the core dumping.

Because I think the likelihood that anybody cares about a.out core dumps is basically zero. While the likelihood that we have some odd old binary that is still a.out is slightly above zero.

So I'd be much happier with this if it was a two-stage thing where we just delete a.out core dumping entirely first, and then deprecate even running a.out binaries separately.

Because I think all the known *bugs* we had were with the core dumping code, weren't they?

Removing it looks trivial. Untested patch attached.

Then I'd be much happier with your "let's deprecate a.out entirely" as a second patch, because I think it's an unrelated issue and much more likely to have somebody pipe up and say "hey, I have this sequence that generates executables dynamically, and I use a.out because it's much simpler than ELF, and now it's broken". Or something.

Jann Horn looked over Linus' patch and suggested additional elements of a.out that would no longer be used by anything, if core dumping was coming out. He suggested those things also could be removed with the same git commit, without risking anyone complaining.

Go to Full Article
Zack Brown

Raspberry Pi 4 on Sale Now, SUSE Linux Enterprise 15 Service Pack 1 Released, Instaclustr Service Broker Now Available, Steam for Linux to Drop Support for Ubuntu 19.10 and Beyond, and Linux 5.2-rc6 Is Out

3 weeks 3 days ago

News briefs for June 24, 2019.

Raspberry Pi 4 is on sale now, starting at $35. The Raspberry Pi blog post notes that "this is a comprehensive upgrade, touching almost every element of the platform. For the first time we provide a PC-like level of performance for most users, while retaining the interfacing capabilities and hackability of the classic Raspberry Pi line". This version also comes with different memory options (1GB for $35, 2GB for $45 or 4GB for $55). You can order one from approved resellers here.

SUSE releases SUSE Linux Enterprise 15 Service Pack 1 on its one-year anniversary of launching the world's first multimodal OS. From the SUSE blog: "SUSE Linux Enterprise 15 SP1 advances the multimodal OS model by enhancing the core tenets of common code base, modularity and community development while hardening business-critical attributes such as data security, reduced downtime and optimized workloads." Some highlights include faster and easier transition from community Linux to enterprise Linux, enhanced support for edge to HPC workloads and improved hardware-based security. Go here for release notes and download links.

Instaclustr announces the availability of its Instaclustr Service Broker. This release "enables customers to easily integrate their containerized applications, or cloud native applications, with open source data-layer technologies provided by the Instaclustr Managed Platform—including Apache Cassandra and Apache Kafka. Doing so enables organizations' cloud native applications to leverage key capabilities of the Instaclustr platform such as automated service discovery, provisioning, management, and deprovisioning of data-layer clusters." Go here for more details.

A Valve developer announces that Steam for Linux will drop support for the upcoming Ubuntu 19.10 release and future Ubuntu releases. Softpedia News reports that "Valve's harsh announcement comes just a few days after Canonical's announcement that they will drop support for 32-bit (i386) architectures in Ubuntu 19.10 (Eoan Ermine). Pierre-Loup Griffais said on Twitter that Steam for Linux won't be officially supported on Ubuntu 19.10, nor any future releases. The Steam developer also added that Valve will focus their efforts on supporting other Linux-based operating systems for Steam for Linux. They will be looking for a GNU/Linux distribution that still offers support for 32-bit apps, and that they will try to minimize the breakage for Ubuntu users."

Linux 5.2-rc6 was released on Saturday. Linus Torvalds writes, "rc6 is the biggest rc in number of commits we've had so far for this 5.2 cycle (obviously ignoring the merge window itself and rc1). And it's not just because of trivial patches (although admittedly we have those too), but we obviously had the TCP SACK/fragmentation/mss fixes in there, and they in turn required some fixes too." He also noted that he's "still reasonably optimistic that we're on track for a calm final part of the release, and I don't think there is anything particularly bad on the horizon."

News Raspberry Pi SUSE Instaclustr Containers cloud native Valve Steam Ubuntu kernel
Jill Franklin

Python's Mypy--Advanced Usage

3 weeks 3 days ago
by Reuven M. Lerner

Mypy can check more than simple Python types.

In my last article, I introduced Mypy, a package that enforces type checking in Python programs. Python itself is, and always will remain, a dynamically typed language. However, Python 3 supports "annotations", a feature that allows you to attach an object to variables, function parameters and function return values. These annotations are ignored by Python itself, but they can be used by external tools.

Mypy is one such tool, and it's an increasingly popular one. The idea is that you run Mypy on your code before running it. Mypy looks at your code and makes sure that your annotations correspond with actual usage. In that sense, it's far stricter than Python itself, but that's the whole point.
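If you haven't used it yet, the workflow is simply to run the mypy command over a source file before executing it. Here's a minimal sketch (installing with pip is one common route, and mytest.py stands in for your own file):

pip install mypy        # one-time install; your distro may also package it
mypy mytest.py          # static check only; nothing in the file is executed
python3 mytest.py       # then run the program as usual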

In my last article, I covered some basic uses for Mypy. Here, I want to expand upon those basics and show how Mypy really digs deeply into type definitions, allowing you to describe your code in a way that lets you be more confident of its stability.

Type Inference

Consider the following code:

x: int = 5
x = 'abc'
print(x)

This first defines the variable x, giving it a type annotation of int. It also assigns it the integer 5. On the next line, it assigns x the string abc. And on the third line, it prints the value of x.

The Python language itself has no problems with the above code. But if you run mypy against it, you'll get an error message:

mytest.py:5: error: Incompatible types in assignment (expression has type "str", variable has type "int")

As the message says, the code declared the variable to have type int, but then assigned a string to it. Mypy can figure this out because, despite what many people believe, Python is a strongly typed language. That is, every object has one clearly defined type. Mypy notices this and then warns that the code is assigning values that are contrary to what the declarations said.

In the above code, you can see that I declared x to be of type int at definition time, but then assigned a string to it, and then I got an error. What if I don't add the annotation at all? That is, what if I run the following code via Mypy:

Go to Full Article
Reuven M. Lerner