Linux Journal

Star Wars Jedi Challenges Gets Lightsaber Versus Mode, Version 0.1 of Kubeflow Released, Arch Linux 2018.05.01 Snapshot Now Available and More

2 months 2 weeks ago

News briefs for May 4, 2018.

Star Wars: Jedi Challenges is getting a free update called "Lightsaber Versus Mode", which adds local multiplayer to the previously single-player game, The Verge reports. The update is available on the Google Play store, but it also requires two of the Lenovo Mirage AR systems, two headsets, two lightsaber controllers and two light-up tracking beacons set to different colors. For this game, you can't "just hack away at your opponent; it procedurally generates a battle, using familiar elements from the single-player dueling mode".

The Arch Linux 2018.05.01 snapshot was released this week. This is the first to include the Linux 4.16 kernel, with mitigations for Meltdown and Spectre, updates for several drivers, improved KVM support and more. Note that this snapshot is only for new deployments. (Source: Softpedia News.)

Google today announced the release of version 0.1 of the open-source Kubeflow tool, which is "designed to bring machine learning to Kubernetes containers". According to TechCrunch, "the idea behind the project is to enable data scientists to take advantage of running machine learning jobs on Kubernetes clusters. Kubeflow lets machine learning teams take existing jobs and simply attach them to a cluster without a lot of adapting."

Google also has open-sourced gVisor, a new way to sandbox containers to "provide a secure isolation boundary between the host operating system and the application running within the container", ZDNet reports. gVisor's core "is a kernel that runs as a normal, unprivileged process that supports most Linux system calls. This kernel, like LXD, is written in Go, which was chosen for its memory- and type-safety".

According to The Register, "researchers have unearthed a fresh new set of ways attackers could potentially exploit data-leaking Spectre CPU vulnerabilities in Intel chips". Currently, there is only information on Intel's plans for patches, but there is evidence that some ARM CPUs also are vulnerable.

News gaming Google Kubernetes Containers Security Intel Spectre Meltdown Arch Linux
Jill Franklin

Privacy Is Still Personal

2 months 2 weeks ago
by Doc Searls

We solved privacy in the natural world with clothing, shelter, manners and laws. So far in the digital world, we have invisibility cloaks and the GDPR. The fastest way to get the rest of what we need is to recognize that privacy isn't a grace of platforms or governments.

In the physical world, privacy isn't controversial. In the digital world, it is.

The difference is that we've had thousands of years to work out privacy in the physical world, and about 20 in the digital one. So it should help to cut ourselves a little slack while we come up with the tech, plus the manners and laws to go with it—in that order. (Even though the gun has been jumped in some cases.)

To calibrate a starting perspective, it might help to start with what Yuval Noah Harari says in his book Sapiens: A Brief History of Humankind:

Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights—and the money paid out in fees. Yet none of these things exists outside the stories that people invent and tell one another. There are no gods in the universe, no nations, no money, no human rights, no laws, and no justice outside the common imagination of human beings.

And yet this common imagination is what gives us civilization. We are civil to one another because of all the imaginings we share. And technologies are what make many of those imaginings possible. Those come first. Without the technologies making privacy possible, we would have none of the common manners and civic laws respecting it.

First among those technologies is clothing.

Nature didn't give us clothing. We had to make it from animal skins and woven fabrics. One purpose, of course, was to protect us from cold and things that might hurt us. But another was to conceal what today we politely call our "privates", plus other body parts we'd rather not show.

Second among those technologies was shelter. With shelter we built and marked personal spaces, and valved access and exposure to those spaces with doors, windows and shades.

How we use clothing and shelter to afford ourselves privacy differs between cultures and settings, but is well understood by everyone within both.

With clothing and shelter, we also can signal to others what personal spaces it is okay and not okay to visit, and under what conditions. The ability to send, receive and respect those signals, and to agree about what they mean, is essential for creating order within a civilization, and laws as well.

As of today, we have very little clothing and shelter in the digital world.

Yes, we do have ways of remaining hidden or anonymous (for example, with crypto and Tor), and selectively revealing facts about ourselves (for example with PKI: public key infrastructure). And services have grown up around those developments, such as VPNs. (Disclosure: Linux Journal's sister company is Private Internet Access, a VPN. See my interview in this issue with Andrew Lee, founder of PIA, about the company's decision to open source its code.) We also have prophylaxis against tracking online, thanks to browser extensions and add-ons, including ad blockers that also stop tracking.

As clothing goes, this is something like having invisibility cloaks and bug spray before we get shirts, pants and underwear. But hey, they work, and they're a start.

We need more, but what? Look for answers elsewhere in this issue. In the meantime, however, don't assume that privacy is a grace of companies' (especially platforms') privacy policies. Here are three things worth knowing about those:

  1. They can be changed whenever the company pleases.
  2. They are not an agreement between you and the company.
  3. They are theirs, not yours.

Alas, nearly all conversation about privacy in governments and enterprises assumes that your privacy is mostly their concern.

Here's how I framed an approach to solving privacy three years ago here, in a column titled "Privacy Is Personal":

So the real privacy challenge is a simple one. We need clothing with zippers and buttons, walls with doors and locks, windows with shutters and shades—that work the same for each and all of us, to give us agency and scale.

Giants aren't going to do it for us. Nor are governments. Both can be responsive and supportive, but they can't be in charge, or that will only make us worse victims than we are already. Privacy for each of us is a personal problem online, and it has to be solved at the personal level. The only corporate or "social" clothing and shelter online are the equivalents of prison garb and barracks.

What would our clothing and shelter be, specifically? A few come to mind:

  • Ways to encrypt and selectively share personal data easily with other parties we have reason to trust.
  • Ways to know the purposes for which shared data is used.
  • Ways to assert terms and policies and obtain agreement with them.
  • Ways to assert and maintain sovereign identities for ourselves and manage our many personal identifiers—and to operate anonymously by default with those who don't yet know us. (Yes, administrative identifiers are requirements of civilization, but they are not who we really are, and we all know that.)
  • Ways to know and protect ourselves from unwelcome intrusion in our personal spaces.

All these things need to be as casual and easily understood as clothing and shelter are in the physical world today. They can't work only for wizards. Privacy is for muggles too. Without agency and scale for muggles, the Net will remain the Land of Giants, who regard us all as serfs by default.

Now that we have support from the GDPR and other privacy laws popping up around the world, we can start working our way down that punch list.

Privacy
Doc Searls

Facebook Open-Sources Its PyTorch AI Framework, Kitty Malware Targets Drupal, GCC 8.1 Released and More

2 months 2 weeks ago

News briefs for May 3, 2018.

Facebook has open-sourced its PyTorch 1.0 AI framework. Facebook was using the framework in-house for its machine learning projects, but now it is free for developers to use as well. According to the story on ZDNet, "PyTorch 1.0 integrates PyTorch's research-oriented aspects with the modular, production-focused capabilities of Caffe2, a popular deep learning framework and ONNX (Open Neural Network Exchange), an open format to represent deep learning models."

Kitty malware is targeting Drupal to mine for cryptocurrency. ZDNet reports that "The vulnerability allows threat actors to employ various attack vectors to compromise Drupal websites. Scanning, backdoor implementation, and cryptocurrency mining are all possible, as well as a data theft and account hijacking." And even worse, "the malware is also commanded to infect other web resources with a mining script dubbed me0w.js", which attacks any future visitors of the web site as well.

The Steam Controller team announced yesterday that the latest Steam Client beta supports the Nintendo Switch Pro Controller. The announcement adds that "the d-pad is ideal for fighting games and platformers and the gyro enhances aim in your action/FPS titles."

GCC 8.1 was released yesterday. This is a major release and contains "substantial new functionality not available in GCC 7.x or previous GCC releases". See this page for a summary of the "huge number of improvements", including improvements to inter-procedural, profile-driven and link-time optimizations.

The results are in for openSUSE's board election: Gertjan Lettink (Knurpht), Simon Lees and Ana Maria Martinez will serve a two-year term. Congrats to the winners!

News Facebook AI open source Drupal Security Cryptomining gaming GCC openSUSE
Jill Franklin

Review: the Librem 13v2

2 months 2 weeks ago
by Shawn Powers

The Librem 13—"the first 13-inch ultraportable designed to protect your digital life"—ticks all the boxes, but is it as good in real life as it is on paper?

I don't think we're supposed to call portable computers "laptops" anymore. There's something about them getting too hot to use safely on your lap, so now they're officially called "notebooks" instead. I must be a thrill-seeker though, because I'm writing this review with the Librem 13v2 directly on my lap. I'm wearing pants, but apart from that, I'm risking it all for the collective. The first thing I noticed about the Librem 13? The company refers to it as a laptop. Way to be brave, Purism!

Why the Librem?

I have always been a fan of companies who sell laptops (er, notebooks) pre-installed with Linux, and I've been considering buying a Purism laptop for years. When our very own Kyle Rankin started working for the company, I figured a company smart enough to hire Kyle deserved my business, so I ordered the Librem 13 (Figure 1). And when I ordered it, I discovered I could pay with Bitcoin, which made me even happier!

Figure 1. The 13" Librem 13v2 is the perfect size for taking on the road (photo from Purism)

There are other reasons to choose Purism computers too. The company is extremely focused on privacy, and it goes so far as to have hardware switches that turn off the webcam and WiFi/Bluetooth radios. And because they're designed for open-source operating systems, there's no "Windows" key; instead there's a meta key with a big white rectangle on it, which is called the Purism Key (Figure 2). On top of all those things, the computer itself is rumored to be extremely well built, with all the bells and whistles usually available only on high-end top-tier brands.

Figure 2. No Windows key here! This beats a sticker-covered Windows logo any day (photo from Purism).

My Test Unit

Normally when I review a product, I get whatever standard model the company sends around to reviewers. Since this was going to be my actual daily driver, I ordered what I wanted on it. That meant the following:

  • i7-6500U processor, which was standard and not upgradable, and doesn't need to be!
  • 16GB DDR4 RAM (default is 4GB).
  • 500GB M.2 NVMe (default is 120GB SATA SSD).
  • Intel HD 520 graphics (standard, not upgradable).
  • 1080p matte IPS display.
  • 720p 1-megapixel webcam.
  • Elantech multitouch trackpad.
  • Backlit keyboard.

The ports and connectors on the laptop are plentiful and well laid out. Figure 3 shows an "all sides" image from the Purism website. There are ample USB ports, full-size HDMI, and the power connector is on the side, which is my preference on laptops. In this configuration, the laptop cost slightly more than $2000.

Figure 3. There are lots of ports, but not in awkward places (photo from Purism).

The Physical Stuff and Things

The Case

The shell of the Librem 13 is anodized aluminum with a black matte texture. The screen's exterior is perfectly plain, without any logos or markings. It might seem like that would feel generic or overly bland, but it's surprisingly elegant. Plus, if you're the sort of person who likes to put stickers on the lid, the Librem 13 is a blank canvas. The underside is nearly as spartan with the company name and little else. It has a sturdy hinge, and it doesn't feel "cheap" in any way. It's hard not to compare an aluminum case to a MacBook, so I'll say the Librem 13 feels less "chunky" but almost as solid.

The Screen

Once open, the screen has a matte finish, which is easy to see and doesn't have the annoying reflection so prevalent on laptops that have a glossy finish. I'm sure there's a benefit to a glossy screen, but whatever it might be, the annoying glare nullifies the benefit for me. The Librem 13's screen is bright, has a sufficient 1080p resolution, and it's pleasant to stare at for hours. A few years back, I'd be frustrated with the limitation of a 1080p (1920x1080) resolution, but as my eyes get older, I actually prefer this pixel density on a laptop. With a higher-res screen, it's hard to read the letters without jacking up the font size, eliminating the benefit of the extra pixels!

The Keyboard

I'm a writer. I'm not quite as old-school as Kyle Rankin with his mechanical PS/2 keyboard, but I am very picky when it comes to what sort of keys are on my laptop. Back in the days of netbooks, I thought a 93%-sized keyboard would be perfectly acceptable for lengthy writing. I was horribly wrong. I didn't realize a person could get cramps in their hands, but after an hour of typing, I could barely pick my nose much less type at speed.

The Librem 13's keyboard is awesome. I won't say it's the best keyboard I've ever used, but as far as laptops go, it's right near the top of the pile. Like most (good) laptops, the Librem 13 has Chicklet style keys, but the subtleties of click pressure, key travel, springiness factor and the like are very adequate. The Librem 13v2 has a new feature, in that the keys are backlit (Figure 4). Like most geeks, I'm a touch typist, but in a dark room, it's still incredibly nice to have the backlight. Honestly, I'm not sure why I appreciate the backlight so much, but I've tried both on and off, and I really hate when the keyboard is completely dark. That might just be a personal preference, but having the choice means everyone is happy.

Figure 4. I don't notice the keyboard after hours of typing, which is what you want in a keyboard (photo from Purism).

The Trackpad

The Librem 13 has a huge (Figure 5), glorious trackpad. Since Apple is known for having quality hardware, it's only natural to compare the Librem 13 to the MacBook Pro (again). For more than a decade, Apple has dominated the trackpad scene. Using a combination of incredible hardware and silky smooth software, the Apple trackpad has been the gold standard. Even if you hate Apple, it's impossible to deny its trackpads have been better than any other—until recently. The Librem 13v2 has a trackpad that is 100% as nice as MacBook trackpads. It is large, supports "click anywhere" and has multipoint support with gestures. What does all that mean? The things that have made Apple King of Trackpad Land are available not only on another company's hardware, but also with Linux. My favorite combination is two-finger scrolling with two-finger clicking for "right-click". The trackpad is solid, stable and just works. I'd buy the Librem 13 for the trackpad alone, but that's just a throwaway feature on the website.

Figure 5. This trackpad is incredible. It's worth buying the laptop for this feature alone (photo from Purism).

The Power Adapter

It might seem like a silly thing to point out, but the Librem 13 uses a standard 19-volt power adapter with a 5.5mm/2.5mm barrel connector. Why is that significant? Because I accidentally threw my power supply away with the box, and I was worried I'd have to special-order a new one. Thankfully, the dozen or so power supplies I have in my office from netbooks, NUCs and so on fit the Librem 13 perfectly. Although I don't recommend throwing your power supply away, it's nice to know replacements are easy to find online and probably in the back of your tech junk drawer.

Hardware Switches

I'm not as security-minded as perhaps I should be. I'm definitely not as security-minded as many Linux Journal readers. I like that the Librem 13 has physical switches that disconnect the webcam and WiFi/Bluetooth. For many of my peers, the hardware switches are the single biggest selling point. There's not much to say other than that they work. They physically switch right to left as opposed to a toggle, and it's clear when the physical connections to the devices have been turned off (Figure 6). With the Librem 13, there's no need for electrical tape over the webcam. Plus, using your computer while at DEFCON isn't like wearing a meat belt at the dog pound. Until nanobots become mainstream, it's hard to beat the privacy of a physical switch.

Figure 6. It's not possible to accidentally turn these switches on or off, which is awesome (photo from Purism).

I worried a bit about how the operating systems would handle hardware being physically disconnected. I thought perhaps you'd need special drivers or custom software to handle the disconnect/reconnect. I'm happy to report all the distributions I've tried have handled the process flawlessly. Some give a pop-up about devices being connected, and some quietly handle it. There aren't any reboots required, however, which was a concern I had.

Audio/Video

I don't usually watch videos on my laptop, but like most people, I will show others around me funny YouTube videos. The audio on the Librem 13 is sufficiently loud and clear. The video subsystem (I mention more about that later) plays video just fine, even full screen. There is also an HDMI port that works like an HDMI connection should. Modern Linux distributions are really good at handling external displays, but every time I plug in a projector and it just works, my heart sings!

PureOS

The Librem 13 comes with Purism's "PureOS" installed out of the box. The OS is Debian-based, which I'm most comfortable using. PureOS uses its own repository, hosted and maintained by Purism. One of the main reasons PureOS exists is so that Purism can make sure there is no closed-source code or proprietary drivers installed on its computers. Although the distro includes tons of packages, the really impressive thing is how well the laptop works without any proprietary code. The "purity" of the distribution is comforting, but the standout feature is how well Purism chose the hardware. Anyone who has used Linux laptops knows there's usually a compromise regarding proprietary drivers and wrappers in order to take full advantage of the system. Not so with the Librem 13 and PureOS. Everything works, and works well.

PureOS works well, but the most impressive aspect of it is what it does while it's working. The pre-installed OS walks you through hard drive encryption on the first boot. The Firefox-based browser (called "Purebrowser") uses HTTPS Everywhere, defaults to DuckDuckGo as the search engine, and if that's not sufficient for your privacy needs, it includes the Tor browser as well. The biggest highlight for me was that since Purebrowser is based on Firefox, the browsing experience wasn't lacking. It didn't "feel" like I was running a specialized browser to protect my identity, which makes doing actual work a lot easier.

Other Distributions

Although I appreciate PureOS, I also wanted to try other options. Not only was I curious, but honestly, I'm stuck in my ways, and I prefer Ubuntu MATE as my desktop interface. The good news is that although I'm not certain the drivers are completely open source, I am sure that Ubuntu installs and works very well. There are a few glitches, but nothing serious and nothing specific to Ubuntu (more on those later).

I tried a handful of other distributions, and they all worked equally well. That makes sense, since the hardware is 100% Linux-compatible. There was an issue with most distributions, which isn't the fault of the Librem 13. Since my system has the M.2 NVMe as opposed to a SATA SSD, most installers have a difficult time determining where to install the bootloader. Frustratingly, several versions of the Ubuntu installer don't allow manual selection of the correct partition either. The workaround seems to be setting up hard drive partitions manually, which allows the bootloader partition to be selected. (For the record, it's /dev/nvme0n1.) Again, this isn't Purism's fault; rather, it's the Linux community getting up to speed with NVMe drives and EFI boot systems.
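As a hedged aside, if you're not sure what the drive is called on your own machine, a couple of quick commands from a terminal in the live installer session will confirm the device name before you commit to a manual layout (lsblk ships on the stock Ubuntu images):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # an NVMe drive shows up as nvme0n1, with partitions such as nvme0n1p1
ls /dev/nvme*                        # double-check the device node before pointing the bootloader at it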

Quirks

There are a few oddities with a freshly installed Librem 13. Most of the quirks are ironed out if you use the default PureOS, but it's worth knowing about the issues in case you ever switch.

NVMe Thing

As I mentioned, the bootloader problem with an NVMe system is frustrating enough that it's worth noting again in this list. It's not impossible to deal with, but it can be annoying.

Backslash Key

The strangest quirk with the Librem 13 is the backslash key. It doesn't map to backslash. On every installation of Linux, when you try to type backslash, you get the "less than" symbol. Thankfully, fixing things like keyboard scancodes is simple in Linux, but it's so strange. I have no idea how the non-standard scancode slipped through QA, but nonetheless, it's something you'll need to deal with. There's a detailed thread on the Purism forum that makes fixing the problem simple and permanent.
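If you want a feel for what that fix involves before reading the thread, the usual mechanism for this sort of remap is a udev hwdb entry. The following is only a sketch: it assumes the key reports scancode 0x56 (verify with a tool such as evtest first), and the file name is made up; the forum thread remains the authoritative recipe.

# /etc/udev/hwdb.d/61-keyboard-local.hwdb (hypothetical file name)
evdev:atkbd:dmi:*
 KEYBOARD_KEY_56=backslash

sudo systemd-hwdb update                        # rebuild the hardware database
sudo udevadm trigger --sysname-match="event*"   # re-apply it to the input devices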

Trackpad Stuff

As I mentioned before, the trackpad on the Librem 13 is the nicest I've ever used on a non-Apple laptop. The oddities come with various distributions and their trackpad configuration software. If your distribution doesn't support the gestures and/or multipoint settings you expect, rest assured that the trackpad supports every feature you are likely to desire. If you can't find the configuration in your distro's setup utility, you might need to dig deeper.
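As one example of digging deeper on an X11 desktop that uses the libinput driver, the xinput tool will list the trackpad's properties and let you flip them directly. The device name below is a guess (Elantech pads often register under a similar string), so check the output of xinput list on your own system first:

xinput list                                                              # find the touchpad's name or numeric id
xinput list-props "ETPS/2 Elantech Touchpad"                             # hypothetical name; substitute yours
xinput set-prop "ETPS/2 Elantech Touchpad" "libinput Tapping Enabled" 1
xinput set-prop "ETPS/2 Elantech Touchpad" "libinput Natural Scrolling Enabled" 1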

The Experience and Summary

The Librem 13 is the fastest laptop I've ever used. Period. The system boots up from a cold start faster than most laptops wake from sleep. Seriously, it's insanely fast. I ran multiple VMs without any significant slowdowns, and I was able to run multiple video-intensive applications without thinking "laptops are so slow" or anything like that.

The only struggle I had was when I tried to use the laptop for live streaming to Facebook using OBS (Open Broadcaster Software). The live transcoding really taxed the CPU. It was able to keep up, but normally on high-end computers, it's easier to offload the transcoding to a discrete video card. Unfortunately, there aren't any non-Intel video systems that work well without proprietary drivers. That means that even though the laptop is as high-end as they get, its integrated video, while perfectly capable, can't compare to a system with a discrete NVIDIA card.

Don't let the live streaming situation sour your view of the Librem 13 though. I had to try really hard to come up with something that the Librem 13 didn't chew through like the desktop replacement it is. And even with my live streaming situation, I was able to transcode the video using the absurdly fast i7 CPU. This computer is lightning fast, and it's easily the best laptop I've ever owned. More than anything, I'm glad this is a system I purchased and not a "review copy", so I don't have to send it back!

Reviews Hardware Laptops Security Privacy Purism Librem 13
Shawn Powers

THRONES OF BRITANNIA Coming Soon to Linux, NVIDIA Tesla V100 GPUs Now on Google Cloud, LINBIT Announces LINSTOR and More

2 months 2 weeks ago

News briefs for May 2, 2018.

Feral Interactive tweeted yesterday that THRONES OF BRITANNIA will be released for Linux soon: "We are closing fast on the macOS and Linux versions, and are currently *aiming* for macOS and Linux releases one to two months after the Windows release on May 3rd."

Google Cloud announced this week that NVIDIA Tesla V100 GPUs (beta) are now available on Google Compute Engine and Kubernetes Engine. According to the ZDNet story, "The Tesla V100 GPU equates to 100 CPUs, giving customers more power to handle computationally demanding applications, like machine learning, analytics, and video processing."

LINBIT recently announced the public beta of LINSTOR, "new open-source software-defined storage available for Kubernetes and OpenShift environments". According to the LINBIT announcement, "LINSTOR takes advantage of DRBD, a part of the Linux kernel for nearly a decade, to deliver fast and reliable data replication. By simplifying storage cluster configuration and ongoing management, then plugging into cloud and container front-ends, users get the resilient infrastructure they need while retaining flexibility to choose vendors."

Red Hat and the Kubernetes community yesterday announced the Operator Framework, a new open-source toolkit for "managing Kubernetes native applications, called Operators, in a more effective, automated and scalable way". They describe the concept like this: "an Operator takes human operational knowledge and encodes it into software that is more easily packaged and shared with consumers. Think of an Operator as an extension of the software vendor's engineering team that watches over your Kubernetes environment and uses its current state to make decisions in milliseconds."

Google Cloud yesterday launched Cloud Composer (beta), a "fully managed workflow orchestration service built on Apache Airflow". Cloud Composer "empowers you to author, schedule, and monitor pipelines that span across clouds and on-premises data centers". Also, as it is built on the open-source Apache Airflow project and operated with Python, Cloud Composer is "free from lock-in and easy to use". See the TechCrunch story for more details.

News gaming Cloud Kubernetes Google Storage OpenShift Red Hat Containers
Jill Franklin

The GDPR Takes Open Source to the Next Level

2 months 2 weeks ago
by Glyn Moody

Richard Stallman will love the new GDPR.

It's not every day that a new law comes into force that will have major implications for digital industries around the globe. It's even rarer when such a law also bolsters free software's underlying philosophy. But the European Union's General Data Protection Regulation (GDPR), which will be enforced from May 25, 2018, does both of those things, making its appearance one of the most important events in the history of open source.

Free software is famously about freedom, not free beverages:

"Free software" means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, "free software" is a matter of liberty, not price. To understand the concept, you should think of "free" as in "free speech," not as in "free beer".

Richard Stallman's great campaign to empower individuals by enabling them to choose software that is under their control has succeeded to the extent that anyone now can choose from among a wide range of free software programs and avoid proprietary lock-in. But a few years back, Stallman realized there was a new threat to freedom: cloud computing. As he told The Guardian in 2008:

One reason you should not use web applications to do your computing is that you lose control. It's just as bad as using a proprietary program. Do your own computing on your own computer with your copy of a freedom-respecting program. If you use a proprietary program or somebody else's web server, you're defenseless. You're putty in the hands of whoever developed that software.

Stallman pointed out that running a free software operating system—for example Google's ChromeOS—offered no protection against this loss of control. Nor does requiring the cloud computing service to use the GNU Affero GPL license solve the problem: just because users have access to the underlying code that is running on the servers does not mean they are in the driver's seat. The real problem lies not with the code, but elsewhere—with the data.

Running free software on your own computer, you obviously retain control of your own data. But that's not the case with cloud computing services—or, indeed, most online services, such as e-commerce sites or social networks. There, highly personal data about you is routinely held by the companies in question. Whether or not they run their servers on open-source code—as most now do—is irrelevant; what matters is that they control your data—and you don't.

The new GDPR changes all that. Just as free software seeks to empower individuals by giving them control over the code they run, so the GDPR empowers people by giving them the ability to control their personal data, wherever it is stored, and whichever company is processing it. The GDPR will have a massive impact on the entire online world because its reach is global, as this EU website on the subject explains:

The GDPR not only applies to organisations located within the EU but it will also apply to organisations located outside of the EU if they offer goods or services to, or monitor the behaviour of, EU data subjects. It applies to all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company's location.

And if you think that the internet giants based outside the EU will simply ignore the GDPR, think again: under the legislation, companies that fail to comply with the new regulation can be fined up to 4% of their global turnover, wherever they are based. Google's total turnover last year was $110 billion, which means that non-compliance could cost it $4.4 billion. Those kinds of figures guarantee that every business in the world that has dealings with EU citizens anywhere, in any way, will be fully implementing the GDPR. In effect, the GDPR will be a privacy law for the whole world, and the whole world will benefit. According to a report in the Financial Times last year, the top 500 companies in the US alone will spend $7.8 billion in order to meet the new rules (paywall). The recent scandal over Cambridge Analytica's massive collection of personal data using a Facebook app is likely to increase pressure globally on businesses to strengthen their protections for personal data for everyone, not just for EU citizens.

The GDPR's main features are as follows. Consent to data processing "must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. It must be as easy to withdraw consent as it is to give it." Companies will no longer be able to hide bad privacy policies in long and incomprehensible terms and conditions. The purpose of the data processing must be clearly attached to the request for consent, and withdrawing consent must be as easy to do as giving it.

There are two important rights in the GDPR. The "right to access" means people are able to find out from an organization whether or not personal data concerning them is being processed, where and for what purpose. They must be given a copy of the personal data, free of charge, on request. That data must be in a "commonly used" and machine-readable format so that it can be easily transferred to another service. The other right is to data erasure, also known as the "right to be forgotten". This applies when data is no longer relevant to the original purposes for processing, or people have withdrawn their consent. However, that right is not absolute: the public interest in the availability of the data may mean that it is not deleted.

One of the innovations of the GDPR is that it embraces "privacy by design and default". That is, privacy must be built in to technology from the start and not added as an afterthought. In many ways, this mirrors free software's insistence that freedom must suffuse computer code, not be regarded as something that can be bolted on afterward. The original Privacy by Design framework explains what this will mean in practice:

Privacy must become integral to organizational priorities, project objectives, design processes, and planning operations. Privacy must be embedded into every standard, protocol and process that touches our lives.

Open-source projects are probably in a good position to make that happen, thanks to their transparent, flexible processes and feedback mechanisms. In addition, under the GDPR, computer security and encryption gain a heightened importance, not least because there are new requirements for "breach notifications". Both the relevant authorities and those affected must be informed rapidly of any breach. Again, open-source applications may have an advantage here thanks to the ready availability of the source code that can be examined for possible vulnerabilities. The new fines for those who fail to comply with the breach notifications—up to 2% of global turnover—could offer an additional incentive for companies to require open-source solutions so that they have the option to look for problems before they turn into expensive infractions of the GDPR.

It would be hard to overstate the importance of the GDPR, which will have global ramifications for both the privacy sector in particular and the digital world in general. Its impact on open source is more subtle, but no less profound. Although it was never intended as such, it will effectively address the key problem left unresolved by free software: how to endow users with the same kind of control that they enjoy over their own computers, when they use online services. As a result, May 25, 2018 should go down as the day when the freedom bestowed by open source went up a notch.

Privacy GDPR open source
Glyn Moody

May 2018 Issue: Privacy

2 months 3 weeks ago
by Carlie Fairchild

Most people simply are unaware of how much personal data they leak on a daily basis as they use their computers. Enter our latest issue with a deep dive into privacy.

After working on this issue, a few of us on the Linux Journal team walked away implementing some new privacy practices; we suspect you may too after you give it a read.

In This Issue:

  • Data Privacy: How to Protect Yourself
  • Effective Privacy Plugins
  • Using Tor Hidden Services
  • Interview: Andrew Lee on Open-Sourcing PIA
  • Review: Purism's Librem 13v2
  • Generating Good Passwords with a Shell Script
  • The GDPR and Open Source
  • Getting Started with Nextcloud 13
  • Examining Data with Pandas
  • FOSS Project Spotlights: Sawmill and CloudMapper
  • GitStorage Review
  • Visualizing Molecules with EasyChem

Subscribers, you can download your May issue now.

Not a subscriber? It’s not too late. Subscribe today and receive instant access to this and ALL back issues since 1994!

Want to buy a single issue? Buy the May magazine or other single back issues in the LJ store.

Privacy Tor Security Librem GDPR
Carlie Fairchild

Cooking with Linux without a Net: Let's Install Windows Subsystem for Linux (WSL) on Windows 10

2 months 3 weeks ago

Please support Linux Journal by subscribing or becoming a patron.

It's Tuesday and that means it's time for Cooking With Linux (without a net), sponsored and supported by Linux Journal to whom I am very thankful. Today, I'm installing the Windows Subsystem for Linux on a brand new Windows 10 tablet PC. And yes, I'll do it all live, without a net, and with a high probability of falling flat on my face. Join me today, at 12 noon, Eastern Time. Be part of the conversation.

video Windows
Marcel Gagné

Mozilla's New Privacy-Conscious Approach to Sponsored Content, Atari Announces Pre-Sale Date of Atari VCS, New Kali Linux Release and More

2 months 3 weeks ago

News briefs for May 1, 2018.

Mozilla announces its new privacy-conscious approach to sponsored content. Earlier this year Mozilla began experimenting with showing a sponsored story occasionally in Pocket. The company is preparing to go live with it later this month with the Firefox 60 release. Mozilla stresses that this new approach must not sacrifice user privacy: "All personalization happens on the client-side, without needing to vacuum up all of your personal data or sharing it with others." It also promises quality content, user control and transparency.

Atari announces that pre-sales of the Atari VCS will begin May 30, 2018 on Indiegogo, which will feature the time-limited Atari VCS Collector's Edition with the special retro-inspired wood front. In addition, there will be the option to pre-order an all-black Onyx edition for $199.

Kali today announced its Kali Linux 2018.2 release, the first to include the 4.15 kernel, which has the fixes for Spectre and Meltdown. It also includes "much better support for AMD GPUs and support for AMD Secure Encrypted Virtualization, which allows for encrypting virtual machine memory such that even the hypervisor can't access it."

System76 released Pop!_OS Linux 18.04 recently, which is based on Canonical's Ubuntu 18.04, Softpedia News reports. This new version features "a brand new installer, new power management features, firmware notifications, and proper HiDPI support".

Porteus recently announced the immediate availability of Porteus-v4.0 final, which comes in seven desktop flavors: KDE, Xfce, LXDE, LXQt, Cinnamon, MATE and Openbox. This release also includes a new update-browser feature, support for EFI and Intel microcode available in the boot folder, among other things.

News Mozilla Privacy gaming Security Distributions
Jill Franklin

Miner One Is Launching Its Bitcoin-Mining High-Altitude Balloon Today, New Stable Version of GIMP and More

2 months 3 weeks ago

News briefs for April 30, 2018.

Miner One announced via press release that it is launching its bitcoin-mining balloon today. You can watch the launch on Facebook. Space Miner One is a capsule and high-altitude balloon that will "perform data-mining operations at the edge of space". Miner One's goal is to "remind people that cryptocurrency is really about the future and the revolutionary technology at its heart: so-called blockchain technology."

After six years of development, a new stable version of GIMP has been released. Version 2.10.0 has a new default Dark theme and supports HiDPI displays and the GEGL image processing library. GIMP 2.10.0 also includes new tools, better file format support and an upgraded user interface, among other things. See the release notes for all the details.

The European Union wants online platforms to incorporate their own bot-detection mechanisms, TechCrunch reports. This is "as part of a voluntary Code of Practice the European Commission now wants platforms to develop and apply—by this summer—as part of a wider package of proposals it's put out which are generally aimed at tackling the problematic spread and impact of disinformation online."

Ubuntu "Budgie" announced its very first LTS version. This release features "more customisation options via budgie welcome, lots more Budgie Applets available to be installed, dynamic Workspaces, hot-corners and Window Shuffler" and more. See the release notes for more info, and go here to download.

Linux Mint published its monthly update this morning. The report details what the Mint team is working on, including adapting to the Debian Stretch and Ubuntu Bionic package bases, finalizing Cinnamon 3.8, adding new features to Mint tools, working on documentation, plans for Mint 19 and LMDE 3 and more.

News Bitcoin Blockchain GIMP Linux Mint Ubuntu Distributions
Jill Franklin

Working around Intel Hardware Flaws

2 months 3 weeks ago
by Zack Brown

Efforts to work around serious hardware flaws in Intel chips are ongoing. Nadav Amit posted a patch to improve compatibility mode with respect to Intel's Meltdown flaw. Compatibility mode is when the system emulates an older CPU in order to provide a runtime environment that supports an older piece of software that relies on the features of that CPU. The thing to be avoided is to emulate massive security holes created by hardware flaws in that older chip as well.

In this case, Linux is already protected from Meltdown by use of PTI (page table isolation), a patch that went into Linux 4.15 and that was subsequently backported all over the place. However, like the BKL (big kernel lock) in the old days, PTI is a heavy-weight solution, with a big impact on system speed. Any chance to disable it without reintroducing security holes is a chance worth exploring.
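As a side note, on a kernel new enough to carry the fix (4.15 or one of the many backports), it's easy to check whether PTI is actually active on a given machine; this is just a sanity check and has nothing to do with the patches under discussion:

cat /sys/devices/system/cpu/vulnerabilities/meltdown   # typically "Mitigation: PTI" or "Not affected"
dmesg | grep -i 'page table'                           # the boot log notes whether page table isolation was enabled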

Nadav's patch was an attempt to do this. The goal was "to disable PTI selectively as long as x86-32 processes are running and to enable global pages throughout this time."

One problem that Nadav acknowledged was that since so many developers were actively working on anti-Meltdown and anti-Spectre patches, there was plenty of opportunity for one patch to step all over what another was trying to do. As a result, he said, "the patches are marked as an RFC since they (specifically the last one) do not coexist with Dave Hansen's enabling of global pages, and might have conflicts with Joerg's work on 32-bit."

Andrew Cooper remarked, chillingly:

Being 32bit is itself sufficient protection against Meltdown (as long as there is nothing interesting of the kernel's mapped below the 4G boundary). However, a 32bit compatibility process may try to attack with Spectre/SP2 to redirect speculation back into userspace, at which point (if successful) the pipeline will be speculating in 64bit mode, and Meltdown is back on the table. SMEP will block this attack vector, irrespective of other SP2 defenses the kernel may employ, but a fully SP2-defended kernel doesn't require SMEP to be safe in this case.

And Dave, nearby, remarked, "regardless of Meltdown/Spectre, SMEP is valuable. It's valuable to everything, compatibility-mode or not."

SMEP (Supervisor Mode Execution Prevention) is a hardware feature: the OS sets a bit in a control register on compatible CPUs, and the processor then refuses to execute code from userspace pages while it is running in kernel (supervisor) mode. That blocks attacks that try to redirect kernel execution into attacker-controlled user memory.
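A quick way to see whether a given CPU offers SMEP at all is to look for the flag in /proc/cpuinfo:

grep -wo smep /proc/cpuinfo | sort -u   # prints "smep" if the CPU advertises it, nothing otherwise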

Andy Lutomirski said that he didn't like Nadav's patch because he said it drew a distinction between "compatibility mode" tasks and "non-compatibility mode" tasks. Andy said no such distinction should be made, especially since it's not really clear how to make that distinction, and because the ramifications of getting it wrong might be to expose significant security holes.

Andy felt that a better solution would be to enable and disable 32-bit mode and 64-bit mode explicitly as needed, rather than guessing at what might or might not be compatibility mode.

The drawback to this approach, Andy said, was that old software would need to be upgraded to take advantage of it, whereas with Nadav's approach, the judgment would be made automatically and would not require old code to be updated.

Linus Torvalds was not optimistic about any of these ideas. He said, "I just feel this all is a nightmare. I can see how you would want to think that compatibility mode doesn't need PTI, but at the same time it feels like a really risky move to do this." He added, "I'm not seeing how you keep user mode from going from compatibility mode to L mode with just a far jump."

In other words, the whole patch, and any alternative, may just simply be a bad idea.

Nadav replied that with his patch, he tried to cover every conceivable case where someone might try to break out of compatibility mode and to re-enable PTI protections if that were to happen. Though he did acknowledge, "There is one corner case I did not cover (LAR) and Andy felt this scheme is too complicated. Unfortunately, I don't have a better scheme in mind."

Linus remarked:

Sure, I can see it working, but it's some really shady stuff, and now the scheduler needs to save/restore/check one more subtle bit.

And if you get it wrong, things will happily work, except you've now defeated PTI. But you'll never notice, because you won't be testing for it, and the only people who will are the black hats.

This is exactly the "security depends on it being in sync" thing that makes me go "eww" about the whole model. Get one thing wrong, and you'll blow all the PTI code out of the water.

So now you tried to optimize one small case that most people won't use, but the downside is that you may make all our PTI work (and all the overhead for all the _normal_ cases) pointless.

And Andy also remarked, "There's also the fact that, if this stuff goes in, we'll be encouraging people to deploy 32-bit binaries. Then they'll buy Meltdown-fixed CPUs (or AMD CPUs!) and they may well continue running 32-bit binaries. Sigh. I'm not totally a fan of this."

The whole thread ended inconclusively, with Nadav unsure whether folks wanted a new version of his patch.

The bottom line seems to be that Linux has currently protected itself from Intel's hardware flaws, but at a cost of perhaps 5% to 30% efficiency (the real numbers depend on how you use your system). And although it will be complex and painful, there is a very strong incentive to improve efficiency by adding subtler and more complicated workarounds that avoid the heavy-handed approach of the PTI patch. Ultimately, Linux will certainly develop a smooth, near-optimal approach to Meltdown and Spectre, and probably do away with PTI entirely, just as it did away with the BKL in the past. Until then, we're in for some very ugly and controversial patches.

Note: If you're mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.

kernel Hardware Spectre Meltdown Intel
Zack Brown

Weekend Reading: Privacy

2 months 3 weeks ago
by Carlie Fairchild

Most people simply are unaware of how much personal data they leak on a daily basis as they use their computers. Enter this weekend's reading topic: Privacy.

The Wire by Shawn Powers

In the US, there has been recent concern over ISPs turning over logs to the government. During the past few years, the idea of people snooping on our private data (by governments and others) really has made encryption more popular than ever before. One of the problems with encryption, however, is that it's generally not user-friendly to add its protection to your conversations. Thankfully, messaging services are starting to take notice of the demand. For me, I need a messaging service that works across multiple platforms, encrypts automatically, supports group messaging and ideally can handle audio/video as well. Thankfully, I found an incredible open-source package that ticks all my boxes: Wire.

Facebook Compartmentalization by Kyle Rankin

Whenever people talk about protecting privacy on the internet, social-media sites like Facebook inevitably come up—especially right now. It makes sense—social networks (like Facebook) provide a platform where you can share your personal data with your friends, and it doesn't come as much of a surprise to people to find out they also share that data with advertisers (it's how they pay the bills after all). It makes sense that Facebook uses data you provide when you visit that site. What some people might be surprised to know, however, is just how much Facebook tracks them when they aren't using Facebook itself but just browsing around the web.

Some readers may solve the problem of Facebook tracking by saying "just don't use Facebook"; however, for many people, that site may be the only way they can keep in touch with some of their friends and family members. Although I don't post on Facebook much myself, I do have an account and use it to keep in touch with certain friends. So in this article, I explain how I employ compartmentalization principles to use Facebook without leaking too much other information about myself.

Protection, Privacy and Playoffs by Shawn Powers

I'm not generally a privacy nut when it comes to my digital life. That's not really a good thing, as I think privacy is important, but it often can be very inconvenient. For example, if you strolled into my home office, you'd find I don't password-protect my screensaver. Again, it's not because I want to invite snoops, but rather it's just a pain to type in my password every time I come back from going to get a cup of tea. (Note: when I worked in a traditional office environment, I did lock my screen. I'm sure working from a home office is why I'm comfortable with lax security.)

A Machine for Keeping Secrets? by Vinay Gupta

The most important thing that the British War Office learned about cryptography was how to keep a secret: Enigma was broken at Bletchley Park early enough in World War II to change the course of the war—and of history. Now here's the thing: only if the breakthrough (called Ultra, which gives you a sense of its importance) was secret could Enigma's compromise be used to defeat the Nazis. Breaking Enigma was literally the "zero-day" that brought down an empire. A zero-day is a bug known only to an attacker. Defenders (those creating/protecting the software) have never seen the exploit and are, therefore, largely powerless to respond until they have done analysis. The longer the zero-day is kept secret, and its use undiscovered, the longer it represents absolute power.

Own Your DNS Data by Kyle Rankin

I honestly think most people simply are unaware of how much personal data they leak on a daily basis as they use their computers. Even if they have some inkling along those lines, I still imagine many think of the data they leak only in terms of individual facts, such as their name or where they ate lunch. What many people don't realize is how revealing all of those individual, innocent facts are when they are combined, filtered and analyzed.

Cell-phone metadata (who you called, who called you, the length of the call and what time the call happened) falls under this category, as do all of the search queries you enter on the Internet.

For this article, I discuss a common but often overlooked source of data that is far too revealing: your DNS data.

Tor Security for Android and Desktop Linux by Charles Fisher

The Tor Project presents an effective countermeasure against hostile and disingenuous carriers and ISPs that, on a properly rooted and capable Android device or Linux system, can force all network traffic through Tor encrypted entry points (guard nodes) with custom rules for iptables. This action renders all device network activity opaque to the upstream carrier—barring exceptional intervention, all efforts to track a user are afterward futile.

Privacy Security Tor Facebook
Carlie Fairchild

Randomly Switching Upper and Lowercase in a Shell Script

2 months 3 weeks ago
by Dave Taylor

Dave wraps up the shell-script L33t generator

Last time, I talked about what's known informally as l33t-speak, a series of letter and letter-pair substitutions that marks the jargon of the hacker elite (or some subset of hacker elite, because I'm pretty sure that real computer security experts don't need to substitute vowels with digits to sound cool and hip).

Still, it was an interesting exercise as a shell-scripting problem, because it's surprisingly simple to adapt a set of conversion rules into a sequence of commands. I sidestepped one piece of it, however, and that's what I want to poke around with in this article: changing uppercase and lowercase letters somewhat randomly.

This is where "Linux Journal" might become "LiNUx jOurNAl", for example. Why? Uhm, because it's a puzzle to solve. Jeez, you ask such goofy questions of me!

Breaking Down a Line Letter by Letter

The first and perhaps most difficult task is to take a line of input and break it down letter by letter so each can be analyzed and randomly transliterated. There are lots of ways to accomplish this in Linux (of course), but I'm going to use the built-in Bash substring variable reference sequence. It looks like this:

${variable:index:length}

So to get the single character at index 9 of the variable input, for example, I could use ${input:9:1}. (Bash substrings count from zero, so with input="Linux Journal", that expression yields "r".) Bash also has another handy variable reference that produces the length of the value of a particular variable: ${#variable}. Put the two together, and here's the basic initialization and loop:

input="$*" length="${#input}" while [ $charindex -lt $length ] do char="${input:$charindex:1}" # conversion occurs here newstring="${newstring}$char" charindex=$(( $charindex + 1 )) done

Keep in mind that charindex is initialized to 0, and newstring is initialized to "", so you can see how this quickly steps through every character, adding it to newstring. "Conversion occurs here" is not very exciting, but that's the placeholder you need.

Lower, Meet Upper, and Vice Versa

Last time I also showed a quick and easy way to pick a random number from 0 to 9, so you can sometimes have something happen and other times not happen. In this command:

doit=$(( $RANDOM % 10 )) # random virtual coin flip

Let's say there's only a 30% chance that an uppercase letter will convert to lowercase, but a 50% chance that a lowercase letter will become uppercase. How do you code that? To start, let's get the basic tests:

if [ -z "$(echo "$char" | sed -E 's/[[:lower:]]//')" ] then # it's a lowercase character elif [ -z "$(echo "$char" | sed -E 's/[[:upper:]]//')" ] then # it's uppercase fi

This is a classic shell-script trick: to ascertain whether a character is a member of a class, delete every character in that class and then test whether the resulting string is empty (the -z test).

The last bit's easy. Generate the random number, then if it's below the threshold, transliterate the char; otherwise, do nothing. Thus:

if [ -z "$(echo "$char" | sed -E 's/[[:lower:]]//')" ] then # lowercase. 50% chance we'll change it if [ $doit -lt 5 ] ; then char="$(echo $char | tr '[[:lower:]]' '[[:upper:]]')" fi elif [ -z "$(echo "$char" | sed -E 's/[[:upper:]]//')" ] then # uppercase. 30% chance we'll change it if [ $doit -lt 3 ] ; then char="$(echo $char | tr '[[:upper:]]' '[[:lower:]]')" fi fi

Put it all together and you have this Frankenstein's monster of a script:
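The listing below is a sketch of how those pieces might fit together, not necessarily Dave's exact file: it assumes the text to mangle arrives as command-line arguments and that the script runs under Bash (it relies on $RANDOM and on Bash substring expansion).

#!/bin/bash
# changecase.sh -- randomly flip the case of letters in the input

input="$*"            # everything passed on the command line
length="${#input}"
charindex=0
newstring=""

while [ $charindex -lt $length ]
do
  char="${input:$charindex:1}"
  doit=$(( $RANDOM % 10 ))    # random virtual coin flip, 0-9

  if [ -z "$(echo "$char" | sed -E 's/[[:lower:]]//')" ] ; then
    # lowercase: 50% chance we'll change it
    if [ $doit -lt 5 ] ; then
      char="$(echo $char | tr '[[:lower:]]' '[[:upper:]]')"
    fi
  elif [ -z "$(echo "$char" | sed -E 's/[[:upper:]]//')" ] ; then
    # uppercase: 30% chance we'll change it
    if [ $doit -lt 3 ] ; then
      char="$(echo $char | tr '[[:upper:]]' '[[:lower:]]')"
    fi
  fi

  newstring="${newstring}$char"
  charindex=$(( $charindex + 1 ))
done

echo "$newstring"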

$ sh changecase.sh Linux Journal is a great read.
LiNuX JoURNal is a GrEaT ReAd.
$ !!
LINuX journAl IS a gREat rEAd
$

Now you're ready for writing some ransom notes, it appears!

HOW-TOs Programming Shell Scripting
Dave Taylor

Mozilla's New Mixed Reality Hubs, NanoPi K1 Plus, Wireshark Update and More

2 months 3 weeks ago

News briefs for April 27, 2018.

Mozilla announced yesterday a preview release of Hubs, a "new way to get together online within Mixed Reality, right in your browser." This preview lets you easily create a "room" online with a click, and you then can meet with others by sharing the link. When they open the link, they enter your room as avatars. Mozilla notes that this is "All with no app downloads, walled gardens, or content gatekeepers, and on any device you wish—and most importantly, through open source software that respects your privacy and is built on web standards."

The NanoPi K1 Plus from FriendlyElec is a new Raspberry Pi competitor. The NanoPi K1 Plus costs $35 and shares the same form factor as the RPi 3, but it has double the RAM, Gigabit Ethernet and 4K video playback. See the wiki for more details. (Source: ZDNet.)

Wireshark, the popular network protocol analyzer, just released version 2.6. This release brings many new or significantly updated features since version 2.5, including support for HTTP Request sequences, support for MaxMind DB files and much more. Download Wireshark from here.

Fedora announced that it now has a "curated set of third-party repositories" containing software that's not normally available in Fedora, such as Google Chrome, PyCharm and Steam. Fedora usually includes only free and open-source software, but with this new third-party repository, users can "opt-in" to these select extras.

Red Hat Enterprise Linux 6.10 beta is now available. According to the release announcement, "6.10 Beta is designed to support the next generation of cloud-native applications through an updated Red Hat Enterprise Linux 6 base image. The Red Hat Enterprise Linux 6.10 Beta base image enables customers to migrate their existing Red Hat Enterprise Linux 6 workloads into container-based applications - suitable for deployment on Red Hat Enterprise Linux 7, Red Hat Enterprise Linux Atomic Host, and Red Hat OpenShift Container Platform." See a full list of the changes here.

Jill Franklin

A Good Front End for R

2 months 3 weeks ago
by Joey Bernard
Data Analysis R Science

R is the de facto statistical package in the Open Source world. It's also quickly becoming the default data-analysis tool in many scientific disciplines.

R's core design includes a central processing engine that runs your code, with a very simple interface to the outside world. This basic interface means it's been easy to build graphical interfaces that wrap the core portion of R, so lots of options exist that you can use as a GUI.

In this article, I look at one of the available GUIs: RStudio. RStudio is a commercial program, with a free community version, available for Linux, macOS and Windows, so your data-analysis work should port easily regardless of environment.

For Linux, you can install the main RStudio package from the download page. From there, you can download RPM files for Red Hat-based distributions or DEB files for Debian-based distributions, then use either rpm or dpkg to do the installation.

For example, in Debian-based distributions, use the following to install RStudio:

sudo dpkg -i rstudio-xenial-1.1.423-amd64.deb

It's important to note that RStudio is only the GUI interface. This means you need to install R itself as a separate step. Install the core parts of R with:

sudo apt-get install r-base

There's also a community repository of available packages, called CRAN, that can add huge amounts of functionality to R. You'll want to install at least some of them in order to have some common tools to use:

sudo apt-get install r-recommended

There are equivalent commands for RPM-based distributions too.
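
For example, on a Fedora-style system, installing R itself would look something like this (the package name R is an assumption here; check your distribution's repositories for the exact names, and install the downloaded RStudio RPM with rpm or dnf):

sudo dnf install R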

At this point, you should have a complete system to do some data analysis.

When you first start RStudio, you'll see a window that looks somewhat like Figure 1.

Figure 1. RStudio creates a new session, including a console interface to R, where you can start your work.

The main pane of the window, on the left-hand side, provides a console interface where you can interact directly with the R session that's running in the back end.

The right-hand side is divided into two sections, where each section has multiple tabs. The default tab in the top section is an environment pane. Here, you'll see all the objects that have been created and exist within the current R session.

The other two tabs provide the history of every command given and a list of any connections to external data sources.

The bottom pane has five tabs available. The default tab gives you a file listing of the current working directory. The second tab provides a plot window where any data plots you generate are displayed. The third tab provides a nicely ordered view into R's library system. It shows a list of all of the currently installed libraries, along with tools to manage updates and install new libraries. The fourth tab is the help viewer. R includes a very complete and robust help system modeled on Linux man pages. The last tab is a general "viewer" pane to view other types of objects.

One part of RStudio that's a great help to people managing multiple areas of research is the ability to use projects. Clicking the menu item File→New Project pops up a window where you can select how your new project will exist on the filesystem.

Figure 2. When you create a new project, it can be created in a new directory, an existing directory or be checked out from a code repository.

As an example, let's create a new project hosted in a local directory. The file display in the bottom-right pane changes to the new directory, and you should see a new file named after the project name, with the filename ending .Rproj. This file contains the configuration for your new project. Although you can interact with the R session directly through the console, doing so doesn't really lead to easily reproduced workflows. A better solution, especially within a project, is to open a script editor and write your code within a script file. This way you automatically have a starting point when you move beyond the development phase of your research.

When you click File→New File→R Script, a new pane opens in the top left-hand side of the window.

Figure 3. The script editor allows you to construct more complicated pieces of code than is possible using just the console interface.

From here, you can write your R code with all the standard tools you'd expect in a code editor. To execute this code, you have two options. The first is simply to click the run button in the top right of this editor pane. This will run either the single line where the cursor is located or an entire block of code that previously had been highlighted.

Figure 4. You can enter code in the script editor and then run it, making code development and data analysis a bit easier on your brain.

If you have an entire script file that you want to run as a whole, you can click the source button in the top right of the editor pane. This lets you reproduce analysis that was done at an earlier time.

The last item to mention is data visualization in RStudio. Actually, the visualization itself is handled by other libraries within R. The core of R includes a very complete, and complex, graphics capability. For normal humans, several libraries are built on top of this. One of the most popular, and for good reason, is ggplot2. If it isn't already installed on your system, you can get it with:

install.packages(c('ggplot2'))

Once it's installed, you can make a simple scatter plot with this:

library(ggplot2)

# a and b are assumed to be existing numeric vectors of equal length,
# for example: a <- rnorm(100); b <- rnorm(100)
c <- data.frame(x=a, y=b)
ggplot(c, aes(x=x, y=y)) + geom_point()

As you can see, ggplot takes data frames as the data to plot, and you control the display with aes() function calls and geom function calls. In this case, I used the geom_point() function to get a scatter plot of points. The plot then is generated in the plot tab of the bottom-right pane.

Figure 5. ggplot2 is one of the most powerful and popular graphing tools available in the R environment.

There's a lot more functionality available in RStudio, including a server portion that can be run on a cluster, allowing you to develop code locally and then send it off to a server for the actual processing.

Joey Bernard

Official Ubuntu 18.04 LTS Release, Gmail Redesign, New Cinnamon 3.8 Desktop and More

2 months 3 weeks ago
News Distributions Google Ubuntu Desktop Amazon

News briefs for April 26, 2018.

Ubuntu 18.04 "Bionic Beaver" LTS is scheduled to be released officially today. This release features major changes, including the kernel updated to version 4.15, GNOME instead of Unity, Python 2 no longer installed by default and much more. According to the release announcement, "Ubuntu Server 18.04 LTS includes the Queens release of OpenStack including the clustering enabled LXD 3.0, new network configuration via netplan.io, and a next-generation fast server installer." See the Release Notes for download links.

Google's new Gmail redesign launched yesterday, with several new privacy features including a new "confidential mode", which allows users to set an expiration date for private email, and "integrated rights management", which lets users block forwarding, copying, downloading or printing certain messages. See the story on The Verge for more information.

openSUSE Tumbleweed had four snapshots released this week, including new updates for the kernel, Mesa, KDE Frameworks and a major version update of libglvnd. See the post on openSUSE for all the details.

The Cinnamon 3.8 desktop has been released and is already available in some repositories, including Arch Linux. Cinnamon 3.8 is scheduled to ship with Linux Mint 19 "Tara" later this summer, and the release "brings numerous improvements, new features, and lots of Python 3 ports for a bunch of components." (Source: Softpedia News.)

Attackers subverted Amazon's domain-resolution service on Tuesday, and according to an Ars Technica report, "masqueraded as cryptocurrency website MyEtherWallet.com and stole about $150,000 in digital coins from unwitting end users. They may have targeted other Amazon customers as well."

Jill Franklin

ONNX: the Open Neural Network Exchange Format

2 months 3 weeks ago
by Braddock Gaskill
Deep Learning HPC Machine Learning ONNX open source python

An open-source battle is being waged for the soul of artificial intelligence. It is being fought by industry titans, universities and communities of machine-learning researchers world-wide. This article chronicles one small skirmish in that fight: a standardized file format for neural networks. At stake is the open exchange of data among a multitude of tools instead of competing monolithic frameworks.

The good news is that the battleground is Free and Open. None of the big players are pushing closed-source solutions. Whether it is Keras and TensorFlow backed by Google, MXNet by Apache endorsed by Amazon, or Caffe2 or PyTorch supported by Facebook, all solutions are open-source software.

Unfortunately, while these projects are open, they are not interoperable. Each framework constitutes a complete stack that until recently could not interface in any way with any other framework. A new industry-backed standard, the Open Neural Network Exchange format, could change that.

Now, imagine a world where you can train a neural network in Keras, run the trained model through the NNVM optimizing compiler and deploy it to production on MXNet. And imagine that is just one of countless combinations of interoperable deep learning tools, including visualizations, performance profilers and optimizers. Researchers and DevOps no longer need to compromise on a single toolchain that provides a mediocre modeling environment and so-so deployment performance.

What is required is a standardized format that can express any machine-learning model and store trained parameters and weights, readable and writable by a suite of independently developed software.

Enter the Open Neural Network Exchange Format (ONNX).

The Vision

To understand the drastic need for interoperability with a standard like ONNX, we first must understand the ridiculous requirements we have for existing monolithic frameworks.

A casual user of a deep learning framework may think of it as a language for specifying a neural network. For example, I want 100 input neurons, three fully connected layers each with 50 ReLU outputs, and a softmax on the output. My framework of choice has a domain language to specify this (like Caffe) or bindings to a language like Python with a clear API.
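
For instance, a sketch of that network in PyTorch (one of many possible framework APIs, chosen purely for illustration) might look like this:

import torch.nn as nn

# 100 inputs, three fully connected layers of 50 ReLU outputs each,
# and a softmax over the final layer's outputs.
model = nn.Sequential(
    nn.Linear(100, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Softmax(dim=1),
)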

However, the specification of the network architecture is only the tip of the iceberg. Once a network structure is defined, the framework still has a great deal of complex work to do to make it run on your CPU or GPU cluster.

Python, obviously, doesn't run on a GPU. To make your network definition run on a GPU, it needs to be compiled into code for the CUDA (NVIDIA) or OpenCL (AMD and Intel) APIs or processed in an efficient way if running on a CPU. This compilation is complex, which is why most frameworks don't support both NVIDIA and AMD GPU back ends.

The job is still not complete though. Your framework also has to balance resource allocation and parallelism for the hardware you are using. Are you running on a Titan X card with more than 3,000 compute cores, or a GTX 1060 with far less than half as many? Does your card have 16GB of RAM or only 4? All of this affects how the computations must be optimized and run.

And still it gets worse. Do you have a cluster of 50 multi-GPU machines on which to train your network? Your framework needs to handle that too. Network protocols, efficient allocation, parameter sharing—how much can you ask of a single framework?

Now you say you want to deploy to production? You wish to scale your cluster automatically? You want a solid language with secure APIs?

When you add it all up, it seems absolutely insane to ask one monolithic project to handle all of those requirements. You cannot expect the authors who write the perfect network definition language to be the same authors who integrate deployment systems in Kubernetes or write optimal CUDA compilers.

The goal of ONNX is to break up the monolithic frameworks. Let an ecosystem of contributors develop each of these components, glued together by a common specification format.

The Ecosystem (and Politics)

Interoperability is a healthy sign of an open ecosystem. Unfortunately, until recently, it did not exist for deep learning. Every framework had its own format for storing computation graphs and trained models.

Late last year that started to change. The Open Neural Network Exchange format initiative was launched by Facebook, Amazon and Microsoft, with support from AMD, ARM, IBM, Intel, Huawei, NVIDIA and Qualcomm. Let me rephrase that as everyone but Google. The format has been included in most well known frameworks except Google's TensorFlow (for which a third-party converter exists).

This seems to be the classic scenario where the clear market leader, Google, has little interest in upending its dominance for the sake of openness. The smaller players are banding together to counter the 500-pound gorilla.

Google is committed to its own TensorFlow model and weight file format, SavedModel, which shares much of the functionality of ONNX. Google is building its own ecosystem around that format, including TensorFlow Serving, Estimator and Tensor2Tensor, to name a few.

The ONNX Solution

Building a single file format that can express all of the capabilities of all the deep learning frameworks is no trivial feat. How do you describe convolutions or recurrent networks with memory? Attention mechanisms? Dropout layers? What about embeddings and nearest neighbor algorithms found in fastText or StarSpace?

ONNX cribs a note from TensorFlow and declares everything is a graph of tensor operations. That statement alone is not sufficient, however. Dozens, perhaps hundreds, of operations must be supported, not all of which will be supported by all other tools and frameworks. Some frameworks may also implement an operation differently from their brethren.

There has been considerable debate in the ONNX community about what level tensor operations should be modeled at. Should ONNX be a mathematical toolbox that can support arbitrary equations with primitives such as sine and multiplication, or should it support higher-level constructs like integrated GRU units or Layer Normalization as single monolithic operations?

As it stands, ONNX currently defines about 100 operations. They range in complexity from arithmetic addition to a complete Long Short-Term Memory implementation. Not all tools support all operations, so just because you can generate an ONNX file of your model does not mean it will run anywhere.

Generation of an ONNX model file also can be awkward in some frameworks because it relies on a rigid definition of the order of operations in a graph structure. For example, PyTorch boasts a very pythonic imperative experience when defining models. You can use Python logic to lay out your model's flow, but you do not define a rigid graph structure as in other frameworks like TensorFlow. So there is no graph of operations to save; you actually have to run the model and trace the operations. The trace of operations is saved to the ONNX file.
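
To make the tracing idea concrete, here's a minimal sketch of a trace-based export with PyTorch's torch.onnx.export (the model, shapes and filename are illustrative placeholders, not taken from any particular project):

import torch
import torch.nn as nn
import torch.onnx

# A small stand-in model; any torch.nn.Module is exported the same way.
model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))

# torch.onnx.export runs the model once on a dummy input, traces the
# operations it executes and writes the resulting graph to model.onnx.
dummy_input = torch.randn(1, 100)
torch.onnx.export(model, dummy_input, "model.onnx")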

Conclusion

It is early days for deep learning interoperability. Most users still pick a framework and stick with it. And an increasing number of users are going with TensorFlow. Google throws many resources and real-world production experience at it—it is hard to resist.

All frameworks are strong in some areas and weak in others. Every new framework must re-implement the full "stack" of functionality. Break up the stack, and you can play to the strengths of individual tools. That will lead to a healthier ecosystem.

ONNX is a step in the right direction.

Note: the ONNX GitHub page is here.

Disclaimer

Braddock Gaskill is a research scientist with eBay Inc. He contributed to this article in his personal capacity. The views expressed are his own and do not necessarily represent the views of eBay Inc.

About the Author

Braddock Gaskill has 25 years of experience in AI and algorithmic software development. He also co-founded the Internet-in-a-Box open-source project and developed the libre Humane Wikipedia Reader for getting content to students in the developing world.

Braddock Gaskill

Purism Partners with UBports to Offer Ubuntu Touch on the Librem 5, Red Hat Storage One Launches and More

2 months 3 weeks ago
News Purism Mobile Red Hat Storage Community

News briefs for April 25, 2018.

Purism has partnered with UBports to offer Ubuntu Touch on its Librem 5 smartphone. By default, the smartphone runs Purism's PureOS, which supports GNOME and KDE Plasma mobile interfaces. UBports is ensuring Ubuntu Touch will run on the phones as well, so the Librem 5 can "now offer users three fully free and open mobile operating system options".

Red Hat today announced the general availability of Red Hat Storage One, a "new approach to software-defined storage aimed at providing pre-configured, workload optimized systems on a number of hardware choices." Features include ease of installation, workload and hardware optimization, flexible scalability and cost-effectiveness.

Microsoft yesterday announced vcpkg, a single C++ library manager for Linux, macOS and Windows: "This gives you immediate access to the vcpkg catalog of C++ libraries on two new platforms, with the same simple steps you are familiar with on Windows and UWP today."

The Linux Foundation and Dice are looking for people to take their open-source job surveys: "this is your chance to let companies, HR and hiring managers and industry organizations know what motivates you as an open source professional." There are two surveys, one for open-source professionals and one specifically for hiring managers.

Outreachy, which provides three-month internships for people from groups traditionally underrepresented in tech, has announced its accepted summer interns. Out of 1,264 applicants, 44 were chosen. Here's a list of all the interns and their projects. If you're interested in participating, the next round of Outreachy internships opens in September 2018 for the December 2018 to March 2019 internship round.

Jill Franklin

Lying with statistics, distributions, and popularity contests on Cooking With Linux (without a net)

2 months 4 weeks ago

Please support Linux Journal by subscribing or becoming a patron.

It's Tuesday and that means it's time for Cooking With Linux (without a net), sponsored and supported by Linux Journal. Today, I'm courting controversy by discussing numbers, OS popularity, and how to pick the right Linux distribution if you want to be where the beautiful people hang out. And yes, I'll do it all live, without a net, and with a high probability of falling flat on my face.

Cooking with Linux Distributions
Marcel Gagné

Blockchain, Part II: Configuring a Blockchain Network and Leveraging the Technology

2 months 4 weeks ago
by Petros Koutoupis
Blockchain HOW-TOs Cryptocurrency Cryptomining

How to set up a private ethereum blockchain using open-source tools and a look at some markets and industries where blockchain technologies can add value.

In Part I, I spent quite a bit of time exploring cryptocurrency and the mechanism that makes it possible: the blockchain. I covered details on how the blockchain works and why it is so secure and powerful. In this second part, I describe how to set up and configure your very own private ethereum blockchain using open-source tools. I also look at where this technology can bring some value or help redefine how people transact across a more open web.

Setting Up Your Very Own Private Blockchain Network

In this section, I explore the mechanics of an ethereum-based blockchain network—specifically, how to create a private ethereum blockchain, a private network to host and share this blockchain, an account, and then how to do some interesting things with the blockchain.

What is ethereum, again? Ethereum is an open-source and public blockchain platform featuring smart contract (that is, scripting) functionality. It is similar to bitcoin but differs in that it extends beyond monetary transactions.

Smart contracts are written in programming languages, such as Solidity (similar to C and JavaScript), Serpent (similar to Python), LLL (a Lisp-like language) and Mutan (Go-based). Smart contracts are compiled into EVM (see below) bytecode and deployed across the ethereum blockchain for execution. Smart contracts help in the exchange of money, property, shares or anything of value, and they do so in a transparent and conflict-free way that avoids the traditional middleman.

If you recall from Part I, a typical layout for any blockchain is one where all nodes are connected to every other node, creating a mesh. In the world of ethereum, these nodes are referred to as Ethereum Virtual Machines (EVMs), and each EVM will host a copy of the entire blockchain. Each EVM also will compete to mine the next block or validate a transaction. Once the new block is appended to the blockchain, the updates are propagated to the entire network, so that each node is synchronized.

In order to become an EVM node on an ethereum network, you'll need to download and install the proper software. To accomplish this, you'll be using Geth (Go Ethereum). Geth is the official Go implementation of the ethereum protocol. It is one of three such implementations; the other two are written in C++ and Python. These open-source software packages are licensed under the GNU Lesser General Public License (LGPL) version 3. The standalone Geth client packages for all supported operating systems and architectures, including Linux, are available here. The source code for the package is hosted on GitHub.

Geth is a command-line interface (CLI) tool that's used to communicate with the ethereum network. It's designed to act as a link between your computer and all other nodes across the ethereum network. When a block is being mined by another node on the network, your Geth installation will be notified of the update and then pass the information along to update your local copy of the blockchain. With the Geth utility, you'll be able to mine ether (similar to bitcoin but the cryptocurrency of the ethereum network), transfer funds between two addresses, create smart contracts and more.

Download and Installation

In my examples here, I'm configuring this ethereum blockchain on the latest LTS release of Ubuntu. Note that the tools themselves are not restricted to this distribution or release.

Downloading and Installing the Binary from the Project Website

Download the latest stable release, extract it and copy it to a proper directory:

$ wget https://gethstore.blob.core.windows.net/builds/geth-linux-amd64-1.7.3-4bb3c89d.tar.gz
$ tar xzf geth-linux-amd64-1.7.3-4bb3c89d.tar.gz
$ cd geth-linux-amd64-1.7.3-4bb3c89d/
$ sudo cp geth /usr/bin/

Building from Source Code

If you are building from source code, you need to install both Go and C compilers:

$ sudo apt-get install -y build-essential golang

Then change into the geth source directory (the GitHub checkout mentioned above) and run:

$ make geth

Installing from a Public Repository

If you are running on Ubuntu and decide to install the package from a public repository, run the following commands:

$ sudo apt-get install software-properties-common
$ sudo add-apt-repository -y ppa:ethereum/ethereum
$ sudo apt-get update
$ sudo apt-get install ethereum

Getting Started

Here's the thing: you don't have any ether to start with. With that in mind, let's limit this deployment to a "private" blockchain network that will sort of run as a development or staging version of the main ethereum network. From a functionality standpoint, this private network will be identical to the main blockchain, with the exception that all transactions and smart contracts deployed on this network will be accessible only to the nodes connected to this private network. Geth will aid in this private or "testnet" setup. Using the tool, you'll be able to do everything the ethereum platform advertises, without needing real ether.

Remember, the blockchain is nothing more than a digital and public ledger preserving transactions in their chronological order. When new transactions are verified and configured into a block, the block is then appended to the chain, which is then distributed across the network. Every node on that network will update its local copy of the chain to the latest copy. But you need to start from some point—a beginning or a genesis. Every blockchain starts with a genesis block, that is, a block "zero" or the very first block of the chain. It will be the only block without a predecessor. To create your private blockchain, you need to create this genesis block. To do this, you need to create a custom genesis file and then tell Geth to use that file to create your own genesis block.

Create a directory path to host all of your ethereum-related data and configurations and change into the config subdirectory:

$ mkdir ~/eth-evm
$ cd ~/eth-evm
$ mkdir config data
$ cd config

Open your preferred text editor and save the following contents to a file named Genesis.json in that same directory:

{
    "config": {
        "chainId": 999,
        "homesteadBlock": 0,
        "eip155Block": 0,
        "eip158Block": 0
    },
    "difficulty": "0x400",
    "gasLimit": "0x8000000",
    "alloc": {}
}

This is what your genesis file will look like. This simple JSON-formatted string describes the following:

  • config — this block defines the settings for your custom chain.

  • chainId — this identifies your Blockchain, and because the main ethereum network has its own, you need to configure your own unique value for your private chain.

  • homesteadBlock — defines the version and protocol of the ethereum platform.

  • eip155Block / eip158Block — these fields add support for non-backward-compatible protocol changes to the Homestead version used. For the purposes of this example, you won't be leveraging these, so they are set to "0".

  • difficulty — this value controls block generation time of the blockchain. The higher the value, the more calculations a miner must perform to discover a valid block. Because this example is simply deploying a test network, let's keep this value low to reduce wait times.

  • gasLimit — gas is ethereum's fuel spent during transactions. As you do not want to be limited in your tests, keep this value high.

  • alloc — this section prefunds accounts, but because you'll be mining your ether locally, you don't need this option.

Now it's time to instantiate the data directory. Open a terminal window, and assuming you have the Geth binary installed and that it's accessible via your working path, type the following:

$ geth --datadir /home/petros/eth-evm/data/PrivateBlockchain init /home/petros/eth-evm/config/Genesis.json
WARN [02-10|15:11:41] No etherbase set and no accounts found as default
INFO [02-10|15:11:41] Allocated cache and file handles database=/home/petros/eth-evm/data/PrivateBlockchain/geth/chaindata cache=16 handles=16
INFO [02-10|15:11:41] Writing custom genesis block
INFO [02-10|15:11:41] Successfully wrote genesis state database=chaindata hash=d1a12d...4c8725
INFO [02-10|15:11:41] Allocated cache and file handles database=/home/petros/eth-evm/data/PrivateBlockchain/geth/lightchaindata cache=16 handles=16
INFO [02-10|15:11:41] Writing custom genesis block
INFO [02-10|15:11:41] Successfully wrote genesis state database=lightchaindata

The command will need to reference a working data directory to store your private chain data. Here, I have specified eth-evm/data/PrivateBlockchain subdirectories in my home directory. You'll also need to tell the utility to initialize using your genesis file.

This command populates your data directory with a tree of subdirectories and files:

$ ls -R /home/petros/eth-evm/
.:
config  data

./config:
Genesis.json

./data:
PrivateBlockchain

./data/PrivateBlockchain:
geth  keystore

./data/PrivateBlockchain/geth:
chaindata  lightchaindata  LOCK  nodekey  nodes  transactions.rlp

./data/PrivateBlockchain/geth/chaindata:
000002.ldb  000003.log  CURRENT  LOCK  LOG  MANIFEST-000004

./data/PrivateBlockchain/geth/lightchaindata:
000001.log  CURRENT  LOCK  LOG  MANIFEST-000000

./data/PrivateBlockchain/geth/nodes:
000001.log  CURRENT  LOCK  LOG  MANIFEST-000000

./data/PrivateBlockchain/keystore:

Your private blockchain is now created. The next step involves starting the private network that will allow you to mine new blocks and have them added to your blockchain. To do this, type:

petros@ubuntu-evm1:~/eth-evm$ geth --datadir /home/petros/eth-evm/data/PrivateBlockchain --networkid 9999
WARN [02-10|15:11:59] No etherbase set and no accounts found as default
INFO [02-10|15:11:59] Starting peer-to-peer node instance=Geth/v1.7.3-stable-4bb3c89d/linux-amd64/go1.9.2
INFO [02-10|15:11:59] Allocated cache and file handles database=/home/petros/eth-evm/data/PrivateBlockchain/geth/chaindata cache=128 handles=1024
WARN [02-10|15:11:59] Upgrading database to use lookup entries
INFO [02-10|15:11:59] Initialised chain configuration config="{ChainID: 999 Homestead: 0 DAO: DAOSupport: false EIP150: EIP155: 0 EIP158: 0 Byzantium: Engine: unknown}"
INFO [02-10|15:11:59] Disk storage enabled for ethash caches dir=/home/petros/eth-evm/data/PrivateBlockchain/geth/ethash count=3
INFO [02-10|15:11:59] Disk storage enabled for ethash DAGs dir=/home/petros/.ethash count=2
INFO [02-10|15:11:59] Initialising Ethereum protocol versions="[63 62]" network=9999
INFO [02-10|15:11:59] Database deduplication successful deduped=0
INFO [02-10|15:11:59] Loaded most recent local header number=0 hash=d1a12d...4c8725 td=1024
INFO [02-10|15:11:59] Loaded most recent local full block number=0 hash=d1a12d...4c8725 td=1024
INFO [02-10|15:11:59] Loaded most recent local fast block number=0 hash=d1a12d...4c8725 td=1024
INFO [02-10|15:11:59] Regenerated local transaction journal transactions=0 accounts=0
INFO [02-10|15:11:59] Starting P2P networking
INFO [02-10|15:12:01] UDP listener up self=enode://f51957cd4441f19d187f9601541d66dcbaf560770d3da1db4e71ce5ba3ebc66e60ffe73c2ff01ae683be0527b77c0f96a178e53b925968b7aea824796e36ad27@[::]:30303
INFO [02-10|15:12:01] IPC endpoint opened: /home/petros/eth-evm/data/PrivateBlockchain/geth.ipc
INFO [02-10|15:12:01] RLPx listener up self=enode://f51957cd4441f19d187f9601541d66dcbaf560770d3da1db4e71ce5ba3ebc66e60ffe73c2ff01ae683be0527b77c0f96a178e53b925968b7aea824796e36ad27@[::]:30303

Notice the use of the new parameter, networkid. This networkid helps ensure the privacy of your network. Any number can be used here. I have decided to use 9999. Note that other peers joining your network will need to use the same ID.
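
As a sketch of what joining from a second machine might look like (the host name ubuntu-evm2 and the peer IP address 192.168.1.100 are assumptions for illustration), you would initialize that node with the same Genesis.json, start it with the same networkid, attach to its console and then add the first node as a peer using the enode URL printed in the first node's startup log:

petros@ubuntu-evm2:~/eth-evm$ geth --datadir /home/petros/eth-evm/data/PrivateBlockchain init /home/petros/eth-evm/config/Genesis.json
petros@ubuntu-evm2:~/eth-evm$ geth --datadir /home/petros/eth-evm/data/PrivateBlockchain --networkid 9999

> admin.addPeer("enode://f51957cd4441f19d187f9601541d66dcbaf560770d3da1db4e71ce5ba3ebc66e60ffe73c2ff01ae683be0527b77c0f96a178e53b925968b7aea824796e36ad27@192.168.1.100:30303")
> admin.peers

Once the connection is established, admin.peers should list the first node.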

Your private network is now live! Remember, every time you need to access your private blockchain, you will need to use these last two commands with the exact same parameters (the Geth tool will not remember it for you):

$ geth --datadir /home/petros/eth-evm/data/PrivateBlockchain init /home/petros/eth-evm/config/Genesis.json
$ geth --datadir /home/petros/eth-evm/data/PrivateBlockchain --networkid 9999

Configuring a User Account

So, now that your private blockchain network is up and running, you can start interacting with it. But in order to do so, you need to attach to the running Geth process. Open a second terminal window. The following command will attach to the instance running in the first terminal window and bring you to a JavaScript console:

$ geth attach /home/petros/eth-evm/data/PrivateBlockchain/geth.ipc
Welcome to the Geth JavaScript console!

instance: Geth/v1.7.3-stable-4bb3c89d/linux-amd64/go1.9.2
modules: admin:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0

>

Time to create a new account that will manipulate the Blockchain network:

> personal.newAccount()
Passphrase:
Repeat passphrase:
"0x92619f0bf91c9a786b8e7570cc538995b163652d"

Remember this string. You'll need it shortly. If you forget this hexadecimal string, you can reprint it to the console by typing:

> eth.coinbase
"0x92619f0bf91c9a786b8e7570cc538995b163652d"

Check your ether balance by typing the following script:

> eth.getBalance("0x92619f0bf91c9a786b8e7570cc538995b163652d")
0

Here's another way to check your balance without needing to type the entire hexadecimal string:

> eth.getBalance(eth.coinbase)
0

Mining

Doing real mining in the main ethereum blockchain requires some very specialized hardware, such as dedicated Graphics Processing Units (GPU), like the ones found on the high-end graphics cards mentioned in Part I. However, since you're mining for blocks on a private chain with a low difficulty level, you can do without that requirement. To begin mining, run the following script on the JavaScript console:

> miner.start()
null

Updates in the First Terminal Window

You'll observe mining activity in the output logs displayed in the first terminal window:

INFO [02-10|15:14:47] Updated mining threads threads=0
INFO [02-10|15:14:47] Transaction pool price threshold updated price=18000000000
INFO [02-10|15:14:47] Starting mining operation
INFO [02-10|15:14:47] Commit new mining work number=1 txs=0 uncles=0 elapsed=186.855us
INFO [02-10|15:14:57] Generating DAG in progress epoch=1 percentage=0 elapsed=7.083s
INFO [02-10|15:14:59] Successfully sealed new block number=1 hash=c81539...dc9691
INFO [02-10|15:14:59] mined potential block number=1 hash=c81539...dc9691
INFO [02-10|15:14:59] Commit new mining work number=2 txs=0 uncles=0 elapsed=211.208us
INFO [02-10|15:15:04] Generating DAG in progress epoch=1 percentage=1 elapsed=13.690s
INFO [02-10|15:15:06] Successfully sealed new block number=2 hash=d26dda...e3b26c
INFO [02-10|15:15:06] mined potential block number=2 hash=d26dda...e3b26c
INFO [02-10|15:15:06] Commit new mining work number=3 txs=0 uncles=0 elapsed=510.357us
[ ... ]
INFO [02-10|15:15:52] Generating DAG in progress epoch=1 percentage=8 elapsed=1m2.166s
INFO [02-10|15:15:55] Successfully sealed new block number=15 hash=d7979f...e89610
INFO [02-10|15:15:55] block reached canonical chain number=10 hash=aedd46...913b66
INFO [02-10|15:15:55] mined potential block number=15 hash=d7979f...e89610
INFO [02-10|15:15:55] Commit new mining work number=16 txs=0 uncles=0 elapsed=105.111us
INFO [02-10|15:15:57] Successfully sealed new block number=16 hash=61cf68...b16bf2
INFO [02-10|15:15:57] block reached canonical chain number=11 hash=6b89ff...de8f88
INFO [02-10|15:15:57] mined potential block number=16 hash=61cf68...b16bf2
INFO [02-10|15:15:57] Commit new mining work number=17 txs=0 uncles=0 elapsed=147.31us

Back to the Second Terminal Window

Wait 10–20 seconds, and on the JavaScript console, start checking your balance:

> eth.getBalance(eth.coinbase)
10000000000000000000

Wait some more, and list it again:

> eth.getBalance(eth.coinbase)
75000000000000000000

Remember, this is fake ether, so don't open that bottle of champagne yet. You can't use this ether on the main ethereum network.
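
Those balances are reported in wei, ether's smallest unit (1 ether = 10^18 wei), so 75000000000000000000 wei is 75 ether. If you'd rather see the figure in ether, the console's web3 helper can do the conversion:

> web3.fromWei(eth.getBalance(eth.coinbase), "ether")
75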

To stop the miner, invoke the following script:

> miner.stop()
true

Well, you did it. You created your own private blockchain and mined some ether.
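
You also can try the fund transfer mentioned earlier. Here's a sketch (it assumes you create a second account in the same session and briefly mine another block so the transaction gets confirmed):

> personal.newAccount()
> personal.unlockAccount(eth.coinbase)
> eth.sendTransaction({from: eth.coinbase, to: eth.accounts[1], value: web3.toWei(1, "ether")})
> miner.start()
> miner.stop()
> eth.getBalance(eth.accounts[1])

Once a block containing the transaction has been mined, the recipient's balance reflects the transferred ether.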

Who Will Benefit from This Technology Today and in the Future?

Although the blockchain originally was developed around cryptocurrency (more specifically, bitcoin), its uses don't end there. Today, it may seem like that's the case, but there are untapped industries and markets where blockchain technologies can redefine how transactions are processed. The following are some examples that come to mind.

Improving Smart Contracts

Ethereum, the same open-source blockchain project deployed earlier, already is doing the whole smart-contract thing, but the idea is still in its infancy, and as it matures, it will evolve to meet consumer demands. There's plenty of room for growth in this area. It probably will eventually creep into governance of companies (such as verifying digital assets, equity and so on), trading stocks, handling intellectual property and managing property ownership, such as land title registration.

Enabling Market Places and Shared Economies

Think of eBay but refocused to be peer-to-peer. This would mean no more transaction fees, but it also will emphasize the importance of your personal reputation, since there will be no single body governing the market in which goods or services are being traded or exchanged.

Crowdfunding

Following in the same direction as my previous remarks about a decentralized marketplace, there also are opportunities for individuals or companies to raise the capital necessary to help "kickstart" their initiatives. Think of a more open and global Kickstarter or GoFundMe.

Multimedia Sharing or Hosting

A peer-to-peer network for aspiring or established musicians definitely could go a long way here—one where the content will reach its intended audiences directly and also avoid those hefty royalty costs paid out to the studios, record labels and content distributors. The same applies to video and image content.

File Storage and Data Management

By enabling a global peer-to-peer network, blockchain technology takes cloud computing to a whole new level. As the technology continues to push itself into existing cloud service markets, it will challenge traditional vendors, including Amazon AWS and even Dropbox and others—and it will do so at a fraction of the price. For example, cold storage data offerings are a multi-hundred billion dollar market today. By distributing your encrypted archives across a global and decentralized network, the need for any single entity to maintain local data-center equipment is reduced significantly.

Social media and how your posted content is managed would change under this model as well. Under the blockchain, Facebook or Twitter or anyone else cannot lay claim to what you choose to share.

Another added benefit of leveraging the blockchain here is that its cryptography helps secure your valuable data against theft or loss.

Internet of Things

What is the Internet of Things (IoT)? It is a broad term describing the networked management of very specific electronic devices, which include heating and cooling thermostats, lights, garage doors and more. Using a combination of software, sensors and networking facilities, people can easily enable an environment where they can automate and monitor home and/or business equipment. A blockchain-based ledger of device identities and events could give those deployments a tamper-resistant audit trail that doesn't depend on a single central authority.

Supply Chain Audits

With a distributed public ledger made available to consumers, retailers can't falsify claims made about their products. Consumers will have the ability to verify their sources, be it food, jewelry or anything else.

Identity Management

There isn't much to explain here. The threat is very real. Identity theft never takes a day off. The dated user name/password systems of today have run their course, and it's about time that existing authentication frameworks leverage the cryptographic capabilities offered by the blockchain.

Summary

This revolutionary technology has enabled organizations in ways that weren't possible a decade ago. Its possibilities are enormous, and it seems that any industry dealing with some sort of transaction-based model will be disrupted by the technology. It's only a matter of time until it happens.

Now, what will the future of blockchain look like? At this stage, it's difficult to say. One thing is certain, though: large companies, such as IBM, are investing big in the technology and building their own blockchain infrastructure that can be sold to and used by corporate enterprises and financial institutions. This may create some issues, however. As these large companies build their blockchain infrastructures, they will file for patents to protect their technologies. And with those patents in their arsenal, there exists the possibility that they may move aggressively against the competition in an attempt to discredit them and undermine their value.

Anyway, if you will excuse me, I need to go make some crypto-coin.

Petros Koutoupis