Channel: Planet Ubuntu

Sebastien Bacher: Bolt 0.8 update


Christian recently released bolt 0.8, which includes IOMMU support. The Ubuntu security team seemed eager to see that new feature available so I took some time this week to do the update.

The new version also features a new bolt-mock utility and ships installed tests. Since I was updating the package anyway, I used the opportunity to add an autopkgtest based on the new bolt-tests binary; hopefully that will help us make sure our tb3 support stays solid in the future ;-)

The update is available in Debian Experimental and Ubuntu Eoan, enjoy!


The Fridge: Ubuntu Weekly Newsletter Issue 586


Welcome to the Ubuntu Weekly Newsletter, Issue 586 for the week of June 30 – July 6, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Serge Hallyn: Running your own mail server


Motivation

Not too long ago there was some hubbub around https://myaccount.google.com/purchases. In brief, if you use Google mail, it tracks your purchases through receipts received in email. Now, some people see this as no big deal or even a feature. Others see it as a privacy invasion, and are reminded that all their data can be mined by the email provider and possibly third parties. Of those, some advocate getting a paid email provider. Agreed, that provides less incentive to monetize your data… but only a bit. Eventually, any company, however good its initial intentions, goes through leadership changes, is bought out, or goes bankrupt. At that point, your data is one of the assets being bargained with.

The other alternative, of course, is to run your own mail server. I won’t lie – this is not for everyone. But it’s not as bad as some make out. I recently reinstalled mine, so I wrote down the steps I took, and will leave them here. I’ve been holding onto this for at least 6 months hoping to eventually run through them again to work out some of the finer details. That hasn’t happened yet, so I’ll just post what I have now as a start.

Running your own mail server is not free. In particular, you'll need to pay for a domain name ($10-15/year) and some place to run the mail server. If you have an always-on machine at home and a stable IP address, then you can run it there. Alternatively, you can pay for a tiny cloud instance on amazon/rackspace/digitalocean/etc. There are cheaper options (including “one year free” amazon micro instances), but a small digitalocean instance will be $5/month. Personally, I keep a large server online for running many VMs and containers, and run the mail server there.

You will also need a certificate. That’s now easy and free with letsencrypt.

There are also some non-monetary costs. You may get a bit more spam. And once in a while, you may run into a case where your mail server is rejected by another.

On the other hand, the server is entirely yours. You can create as many accounts for individual purposes as you like. You can point multiple domain names at it, so that you don’t give away your primary one for every little purchase you make. Ten, twenty years from now, you can still have all your friends’ and family emails in the same place, in the same format. This last one is too often overlooked, yet one of the best advantages of open source for all applications.

Setup

I picked up a new hosted server, and installed Ubuntu 18.04 on it. First thing I did was go to my DNS provider, register a name for the server, and set up the new MX record to point to it. The details vary a bit depending on your DNS provider, so I won't go into them here (I'll do a post if people ask for clarification). If you're looking for a provider, I do recommend zoneedit.

I installed lxc and created a new container in which to run my mailserver:

apt-get -y install lxc1
lxc-create -t download -n mail -- -d ubuntu -r bionic -a amd64

I gave it a static ip address through dnsmasq:

echo "dhcp-host=mail,10.0.5.155" >> /etc/lxc/dnsmasq.conf
echo "LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf" >> /etc/default/lxc-net
sudo systemctl stop lxc-net
sudo systemctl start lxc-net

The point of the static ip address is to facilitate forwarding mail related ports into the container. I did this with a script started at boot by systemd:

cat > /usr/bin/lxc-ports-fwd << 'EOF'
#!/bin/sh
nic=enp0s31f6
iptables -t nat -A PREROUTING -p tcp -i ${nic} --dport 25 -j DNAT --to-destination 10.0.5.155:25
iptables -t nat -A PREROUTING -p tcp -i ${nic} --dport 465 -j DNAT --to-destination 10.0.5.155:465
iptables -t nat -A PREROUTING -p tcp -i ${nic} --dport 993 -j DNAT --to-destination 10.0.5.155:993
iptables -t nat -A PREROUTING -p tcp -i ${nic} --dport 587 -j DNAT --to-destination 10.0.5.155:587
EOF
chmod 755 /usr/bin/lxc-ports-fwd
cat > /etc/systemd/system/lxc-ports-fwd.service << EOF
[Unit]
Description=Bring up port forwards for lxc
After=lxc-net.service
Before=lxc.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/lxc-ports-fwd

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable lxc-ports-fwd
systemctl start lxc-ports-fwd

I also installed and ran letsencrypt on the host:

sudo apt -y install letsencrypt
letsencrypt -d mail.example.org -m me@my.mail certonly

Next I started up the container and installed the basic mail tools:

sudo lxc-start -n mail
sudo lxc-attach -n mail -- apt -y install dovecot-imapd postfix mutt

New since my last mail server install is the removal of dovecot-postfix in favor of the mail-stack-delivery package:

sudo lxc-attach -n mail -- apt -y install mail-stack-delivery

After this I copied the letsencrypt keys into the container:

lxc-attach -n mail -- mkdir -p /etc/letsencrypt/live/mail.example.org
cp /etc/letsencrypt/live/mail.example.org/* /var/lib/lxc/mail/rootfs/etc/letsencrypt/live/mail.example.org

and edited /etc/postfix/main.cf and /etc/dovecot/conf.d/10-ssl.conf to point at those certificates. In main.cf that means:

smtpd_tls_cert_file = /etc/letsencrypt/live/mail.example.org/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mail.example.org/privkey.pem
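
The dovecot side is similar; a minimal sketch of the corresponding lines in /etc/dovecot/conf.d/10-ssl.conf (the leading `<` tells dovecot to read the value from the named file):

```
ssl = required
ssl_cert = </etc/letsencrypt/live/mail.example.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.example.org/privkey.pem
```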

This is enough to be able to send and receive mail. Personally, I want to run this server in a uid-mapped container and from a luks-encrypted device. While you can set up the whole container that way from the start, for simplicity of examples, you could at this point copy over and uid-shift the container contents to a new device, and update the container configuration accordingly.

Some notes which I should elaborate on later:

    • SPF
    • postscreen setup
    • /etc/postfix/master.cf, i.e. uncomment smtps
    • /etc/dovecot/conf.d details
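
As a sketch of two of those notes (values are illustrative, for the hypothetical example.org setup used above): an SPF TXT record that authorizes only the domain's MX hosts to send its mail could look like

```
example.org.  IN  TXT  "v=spf1 mx -all"
```

and the smtps service to uncomment in the stock /etc/postfix/master.cf looks roughly like this:

```
smtps     inet  n       -       y       -       -       smtpd
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
```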
    Canonical Design Team: How to set up Kubernetes on a Mac


    Mac users can run a Kubernetes environment with MicroK8s to develop and test applications. The steps below make it easy to set up this environment.

    A clean install of MicroK8s on macOS and the Grafana dashboard.

    MicroK8s is a local Kubernetes distribution published by Ubuntu. It is a lightweight snap that installs on a PC as a single-node cluster. Although MicroK8s is built only for Linux, it can also be used on a Mac by starting an Ubuntu VM.

    MicroK8s runs the native Kubernetes services on Ubuntu and on any operating system that supports snaps. This is helpful for developing applications, creating simple Kubernetes clusters, and local microservice development; all development work ultimately needs to be deployed somewhere.

    MicroK8s provides another level of reliability because it offers a development environment that tracks current Kubernetes (K8s for short below) releases: within a week of the latest upstream K8s release, it is available on Ubuntu.

    Setting up Kubernetes on a Mac

    K8s and MicroK8s both need a Linux kernel to work, so both require an Ubuntu environment. Mac users can get one with Multipass, a tool designed to make it easy to start Ubuntu VMs on macOS, Windows, and Linux.

    The tutorial below walks through setting up Multipass and running K8s on a Mac.

    Step 1: Install a VM on the Mac with Multipass

    The latest Multipass package can be found on GitHub; double-click the .pkg file to install it. To launch a VM with MicroK8s:

    multipass launch --name microk8s-vm --mem 4G --disk 40G
    multipass exec microk8s-vm -- sudo snap install microk8s --classic
    multipass exec microk8s-vm -- sudo iptables -P FORWARD ACCEPT

    The commands above create a VM named microk8s-vm with 4GB of RAM and a 40GB disk. Make sure to leave enough resources for the host.

    Use the following command to check the IP address assigned to the VM (make a note of it; we will use it later):

    multipass list
    Name         State IPv4            Release
    microk8s-vm  RUNNING 192.168.64.1   Ubuntu 18.04 LTS

    Step 2: Interacting with MicroK8s on the VM

    There are three ways to do this:

    From the command line, with Multipass's shell prompt:

    multipass shell microk8s-vm                                                                                     

    With multipass exec, to run a command without getting a prompt:

    multipass exec microk8s-vm -- /snap/bin/microk8s.status                             

    By calling the K8s API server running in the VM. Here we use the MicroK8s kubeconfig file with a locally installed kubectl to access the K8s instance inside the VM. Run the following command:

    multipass exec microk8s-vm -- /snap/bin/microk8s.config > kubeconfig     

    Next, install kubectl on the local host and then use the kubeconfig:

    kubectl --kubeconfig=kubeconfig get all --all-namespaces            
    NAMESPACE   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
    default     service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   3m12s

    Step 3: Access the VM with Multipass and enable MicroK8s add-ons

    A basic set of MicroK8s add-ons to enable is the Grafana dashboard. Below we show how, in a single step, you can enable Grafana to monitor and analyse a MicroK8s instance. Run the following command:

    multipass exec microk8s-vm -- /snap/bin/microk8s.enable dns dashboard
    Enabling DNS
    Applying manifest
    service/kube-dns created
    serviceaccount/kube-dns created
    configmap/kube-dns created
    deployment.extensions/kube-dns created
    Restarting kubelet
    DNS is enabled
    Enabling dashboard
    secret/kubernetes-dashboard-certs created
    serviceaccount/kubernetes-dashboard created
    deployment.apps/kubernetes-dashboard created
    service/kubernetes-dashboard created
    service/monitoring-grafana created
    service/monitoring-influxdb created
    service/heapster created
    deployment.extensions/monitoring-influxdb-grafana-v4 created
    serviceaccount/heapster created
    configmap/heapster-config created
    configmap/eventer-config created
    deployment.extensions/heapster-v1.5.2 created
    dashboard enabled

    Next, check the deployment progress with:

    multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl get all --all-namespaces                                                                                                                        

    The output is as follows:

    Once all the necessary services are running, use the links printed by the following command to access the dashboard:

    multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl cluster-info  
    Kubernetes master is running at https://127.0.0.1:16443
    Heapster is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/heapster/proxy
    KubeDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    Grafana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
    InfluxDB is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    From inside the VM, this link can be used to access the Grafana dashboard. From the host, however, we can reach it through a proxy:

    multipass exec microk8s-vm -- /snap/bin/microk8s.kubectl proxy --address='0.0.0.0' --accept-hosts='.*' 
    Starting to serve on [::]:8001

    Keep this terminal running and note the port number (8001); we will need it in the next step. To access the Grafana dashboard from the host, we need to modify the dashboard link reported inside the VM:

  • Replace 127.0.0.1 with the VM's IP address (from multipass info microk8s-vm)
  • In https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy, replace the port (16443) with the proxy port, 8001
  • Enter the resulting link in a browser and you will see the Grafana dashboard, as shown in the image below:
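
    As a sketch, the rewrite can be done in the shell; 192.168.64.1 stands in for whatever IP `multipass info microk8s-vm` reports on your machine:

```shell
# Hypothetical values: 192.168.64.1 is the VM's IP, 8001 is the kubectl proxy port.
url="https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy"
vm_ip="192.168.64.1"
echo "$url" | sed "s#https://127.0.0.1:16443#http://${vm_ip}:8001#"
```

    Opening the printed URL in a browser on the host reaches the dashboard through the proxy.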
    Summary

    Developing and testing applications locally with MicroK8s lets teams deploy faster, which is valuable and meaningful for developers and DevOps teams alike.

    The post How to set up Kubernetes on a Mac appeared first on Ubuntu Blog.

    Canonical Design Team: MAAS 2.6 – ESXi storage, multiple gateways, HTTP boot and more


    Canonical is happy to announce the availability of MAAS 2.6. This new release introduces a range of very exciting features and several improvements that enhance MAAS across various areas. Let’s talk about a few notable ones:

    Growing support for ESXi Datastores

    MAAS has expanded its support of ESXi by allowing administrators to create & configure VMFS datastores on physically connected disks.

    MAAS 2.5 introduced the ability to deploy VMware’s ESXi. Its storage configuration, however, was limited to selecting the disk on which to deploy the operating system. As of 2.6, MAAS also provides the ability to configure datastores, allowing administrators to create one or more datastores using one or more physical disks. More information is available at https://docs.maas.io/2.6/en/installconfig-vmfs-datastores .

    More information on how to create MAAS ESXi images is available at https://docs.maas.io/2.6/en/installconfig-images-vmware .

    Multiple default gateways

    MAAS 2.6 introduces a network configuration change (for Ubuntu), where it will leverage the use of source routing to support multiple default gateways.

    As of MAAS 2.5, all deployed machines were configured with a single default gateway. As a result, if a machine was configured in multiple subnets (each with a gateway defined), all outgoing traffic would leave via the default gateway, even when it was intended to go out through a particular subnet’s gateway.

    To address this, MAAS 2.6 has changed the way it configures the network when a machine has multiple interfaces in different subnets, to ensure that all traffic that is meant to go through the subnet’s gateway actually does.

    Please note that this is currently limited to Ubuntu, since it depends on source routing via netplan, which is currently only supported by cloud-init on Ubuntu.

    Leveraging HTTP boot for most of the PXE process

    MAAS 2.6 is now leveraging the use of HTTP (as much as possible) to boot machines over the PXE process rather than solely rely on TFTP. The reasons for the change are not only to support newer standards/features, but also to improve PXE boot performance. As such, you should now expect that:

    • UEFI systems that implement the 2.5 spec can now fully boot over HTTP.
    • KVMs will rely on iPXE to perform HTTP boot.
    • Other architectures that support HTTP boot, such as arm64, will prefer it over TFTP.

    Prometheus metrics

    MAAS now exposes Prometheus data that can be used to track statistics and performance. For more information on what metrics are exposed, please refer to https://discourse.maas.io/t/maas-2-6-0-released/724 and, to learn how to enable them, refer to https://docs.maas.io/2.6/en/manage-prometheus-metrics .

    Other features and improvements

    A more extensive list of features and improvements introduced in MAAS 2.6 includes:

    • Performance – Leverage HTTP for most of the PXE process
    • Performance – Track stats and metrics with Prometheus
    • User experience – Provides a more granular boot output
    • Networking – Multiple default gateways
    • Power control – Added support for redfish
    • Power control – Added support for OpenBMC
    • ESXi – Support configuring datastores
    • ESXi – Support registering to vCenter
    • User experience – Dismiss/suppress failed tests
    • User experience – Clear discovered devices
    • User experience – Added note to machine
    • User experience – Added grouping to machine listing page

    Please refer to https://discourse.maas.io/t/maas-2-6-0-released/724/2 for more information.

    The post MAAS 2.6 – ESXi storage, multiple gateways, HTTP boot and more appeared first on Ubuntu Blog.

    Ubuntu Podcast from the UK LoCo: S12E14 – Sega Rally Championship


    This week we’ve been installing macOS and Windows on a MacBook Pro and a Dell XPS 15. We discuss Running Challenges, bring you some command line love and go over all your feedback.

    It’s Season 12 Episode 14 of the Ubuntu Podcast! Mark Johnson, Martin Wimpress and Laura Cowen are connected and speaking to your brain.

    In this week’s show:

    sudo fatsort -n /dev/sdb1
    • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

    • Image taken from Sega Rally Championship arcade machine manufactured in 1994 by Sega.

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

    Canonical Design Team: Deploying Kubernetes at the edge – Part I: building blocks


    Edge computing continues to gain momentum to help solve unique challenges across telco, media, transportation, logistics, agricultural and other market segments. If you are new to edge computing architectures, of which there are several, the following diagram is a simple abstraction for emerging architectures:

    In this diagram you can see that an edge cloud sits next to field devices. In fact, there is a concept of extreme edge computing which puts computing resources in the field – which is the circle on the far left. An example of extreme edge computing is a gateway device that connects to all of your office or home appliances and sensors.

    What exactly is edge computing? Edge computing is a variant of cloud computing, with your infrastructure services – compute, storage, and networking – placed physically closer to the field devices that generate data. Edge computing allows you to place applications and services closer to the source of the data, which gives you the dual benefit of lower latency and lower Internet traffic. Lower latency boosts the performance of field devices by enabling them to not only respond quicker, but to also respond to more events. And lowering Internet traffic helps reduce costs and increase overall throughput – your core datacenter can support more field devices. Whether an application or service lives in the edge cloud or the core datacenter will depend on the use case.

    How can you create an edge cloud? Edge clouds should have at least two layers – both layers will maximise operational effectiveness and developer productivity – and each layer is constructed differently. 

    The first layer is the Infrastructure-as-a-Service (IaaS) layer. In addition to providing compute and storage resources, the IaaS layer should satisfy the network performance requirements of ultra-low latency and high bandwidth.

    The second layer is the Kubernetes layer, which provides a common platform to run your applications and services. Whereas using Kubernetes for this layer is optional, it has proven to be an effective platform for those organisations leveraging edge computing today. You can deploy Kubernetes to field devices, edge clouds, core datacenters, and the public cloud. This multi-cloud deployment capability offers you complete flexibility to deploy your workloads anywhere you choose. Kubernetes offers your developers the ability to simplify their devops practices and minimise time spent integrating with heterogeneous operating environments.

    Okay, but how can I deploy these layers? At Canonical, we accomplish this through the use of well defined, purpose-built technology primitives. Let’s start with the IaaS layer, which the Kubernetes layer relies upon.

    Physical infrastructure lifecycle management

    The first step is to think about the physical infrastructure, and what technology can be used to manage the infrastructure effectively, converting the raw hardware into an IaaS layer. Metal-as-a-Service (MAAS) has proven to be effective in this area. MAAS provides the operational primitives that can be used for hardware discovery, giving you the flexibility to allocate compute resources and repurpose them dynamically. These primitives expose bare metal servers to a higher level of orchestration through open APIs, much like you would experience with OpenStack and public clouds.

    With the latest MAAS release you can automatically create edge clouds based on KVM pods, which effectively enable operators to create virtual machines with pre-defined sets of resources (RAM, CPU, storage and over-subscription ratios). You can do this through the CLI, Web UI or the MAAS API. You can use your own automation framework or use Juju, Canonical’s advanced orchestration solution.

    MAAS can also be deployed in a very optimised fashion to run on top of the rack switches – just as we demonstrated during the OpenStack Summit in Berlin.


    Image 1: OpenStack Summit demo: MAAS running on a ToR switch (Juniper QFX5100)

    Edge application orchestration

    Once discovery and provisioning of physical infrastructure for the edge cloud is complete, the second step is to choose an orchestration tool that will make it easy to install Kubernetes, or any software, on your edge infrastructure. Juju allows you to do just that – you can easily install Charmed Kubernetes, a fully compliant and upstream Kubernetes. And with Kubernetes you can install containerised workloads, offering them the highest possible performance. In the telecommunications sector, workloads like Container Network Functions (CNFs) are well suited to this architecture.

    There are additional benefits to Charmed Kubernetes. With the ability to run in a virtualised environment or directly on bare metal, fully automated Charmed Kubernetes deployments are designed with built-in high availability, allowing for in-place, zero-downtime upgrades. This is a proven, truly resilient edge infrastructure architecture and solution. An additional benefit of Charmed Kubernetes is its ability to automatically detect and configure GPGPU resources for accelerated AI model inferencing and containerised transcoding workloads.

    Next steps

    Once the proper technology primitives are selected, it is time to deploy the environment and start onboarding and validating the application. The next part of this blog series will include hands-on examples of what to do.

    The post Deploying Kubernetes at the edge – Part I: building blocks appeared first on Ubuntu Blog.

    Jonathan Carter: My Debian 10 (buster) Report


    In the early hours of Sunday morning (my time), Debian 10 (buster) was released. It’s amazing to be a part of an organisation where so many people work so hard to pull together and make something like this happen. Creating and supporting a stable release can be tedious work, but it’s essential for any kind of large-scale or long-term deployment. I feel honored to have had a small part in this release.

    Debian Live

    My primary focus area for this release was to get Debian live images in a good shape. It’s not perfect yet, but I think we made some headway. The out of box experiences for the desktop environments on live images are better, and we added a new graphical installer that makes Debian easier to install for the average laptop/desktop user. For the bullseye release I intend to ramp up quality efforts and have a bunch of ideas to make that happen, but more on that another time.

    Calamares installer on Cinnamon live image.

    Other new stuff I’ve been working on in the Buster cycle

    Gamemode

    Gamemode is a library and tool that changes your computer’s settings for maximum performance when you launch a game. Some new games automatically invoke Gamemode when they’re launched, but for most games you have to do it manually; check their GitHub page for documentation.

    Innocent de Marchi Packages

    I was sad to learn about the passing of Innocent de Marchi, a math teacher who was also a Debian contributor and whose packages I had sponsored a few times before. I didn’t know him personally, but learned that he was really loved in his community. I’m continuing to maintain some of his packages that I also had an interest in:

    • calcoo – generic lightweight graphical calculator app that can be useful on desktop environments that don’t have one
    • connectagram – a word unscrambling game that gets its words from wiktionary
    • fracplanet – fractal planet generator
    • fractalnow – fast, advanced fractal generator
    • gnubik – 3D Rubik’s cube game
    • tanglet – single player word finding game based on Boggle
    • tetzle – jigsaw puzzle game (was also Debian package of the Day #44)
    • xabacus – simulation of the ancient calculator

    Powerline Goodies

    I wrote a blog post on vim-airline and powerlevel9k shortly after packaging those: New powerline goodies in Debian.

    Debian Desktop

    I helped co-ordinate the artwork for the Buster release, although Laura Arjona did most of the heavy lifting on that. I updated some of the artwork in the desktop-base package and in debian-installer. Working on the artwork packages exposed me to some of their bugs but not in time to fix them for buster, so that will be a goal for bullseye. I also packaged the font that’s widely used in the buster artwork called quicksand (Debian package: fonts-quicksand). This allows SVG versions of the artwork in the system to display with the correct font.

    Bundlewrap

    Bundlewrap is a configuration management system written in Python. If you’re familiar with bcfg2 and Ansible, the concepts in Bundlewrap will look very familiar to you. It’s not as featureful as either of those systems, but what it lacks in advanced features it more than makes up for in ease of use and how easy it is to learn. It’s immediately useful for the large amount of cases where you want to install some packages and manage some config files based on conditions with templates. For anything else you might need you can write small Python modules.

    Catimg

    catimg is a tool that converts jpeg, png, ico and gif files to terminal output. This was also Debian Package of the day #26.

    Gnome Shell Extensions

    • gnome-shell-extension-dash-to-panel: dash-to-panel is an essential shell extension for me, and does more to make Gnome 3 feel like Gnome 2.x than the classic mode does. It’s the easiest way to get a nice single panel on the top of the screen that contains everything that’s useful.
    • gnome-shell-extension-hide-veth: If you use LXC or Docker (or similar), you’ll probably be somewhat annoyed at all the ‘veth’ interfaces you see in network manager. This extension will hide those from the GUI.
    • gnome-shell-extension-no-annoyance: No annoyance fixes something that should really be configurable in Gnome by default. It removes all those nasty “Window is ready” notifications that are intrusive and distracting.

    Other

    That’s a wrap for the new Debian packages I maintain in Buster. There’s a lot more I’d like to talk about that happened during this cycle, like that crazy month when I ran for DPL, and DebConf stuff too, but I’m all out of time. On that note, I’m heading to DebCamp/DebConf in around 12 hours and look forward to seeing many of my Debian colleagues there :-)


    Benjamin Mako Hill: Hairdressers with Supposedly Funny Pun Names I’ve Visited Recently


    Mika and I recently spent two weeks biking home to Seattle from our year in Palo Alto. The route was ~1400 kilometers and took us past 10 volcanoes and 4 hot springs.

    Route of our bike trip from Davis, CA to Oregon City, OR. An elevation profile is also shown.

    To my delight, the route also took us past at least 8 hairdressers with supposedly funny pun names! Plus two in Oakland on our way out.

    As a result of this trip, I’ve now made 24 contributions to the Hairdressers with Supposedly Funny Pun Names Flickr group photo pool.

    Stephen Michael Kellat: Maintaining Independent Infrastructure


    One thing I sometimes end up embarrassing myself about in the Ubuntu Podcast telegram chatter is that I buy and sell tiny amounts of shares on the US stock markets. All I can say is that I got spooked by the 35-day "government shutdown" at the start of the calendar year, when I was stuck working without pay as a federal civil servant. Granted, I did get back pay, but the Human Capital Office at work is still fiddling with things even now in terms of getting payroll records and other matters fixed. I generally buy shares in companies that pay dividends and then take the dividends as cash. At work we refer to that as "unearned income", especially as it is taxed at a rate different from the one applied to my wages.

    My portfolio is somewhat weird. I am rather heavily invested in shipping, whether it happens to be oil tankers or dry bulk cargo ships. In contrast, I have almost nothing invested in technology companies. There aren't many "open source" companies available on the open stock market, and the ones out there either I can't afford a single share of or they violate my portfolio rule that stocks held must pay a dividend of some sort. Too many companies in the computer tech world appear to make money but don't send any profits back to shareholders; their dividends are stuck at USD$0.00.

    All that being said, I found a very important post on Mastodon to be of interest. The post, located at https://fosstodon.org/@badrihippo/102426802394820437, stated:

    gPodder.net is looking for a new maintainer!

    If nobody comes forward by 2020, it'll be forced to shut down. Please boost to spread the word.

    https://github.com/gpodder/mygpo/blob/master/maintainer-needed.md

    Note that this is the podcast-sharing website: I'm not sure about the status of the app, but I think that's still doing fine.

    #gPodder #podcasts #python #webdev #helpwanted

    Now, you might wonder what this little piece of infrastructure happens to be. It is actually somewhat critical to have a free and open culture. The gPodder.net site is a critically important site for podcast discovery. Unlike Apple Podcasts, gPodder.net is integrated into many podcast applications across many platforms.

    As a shareholder in multiple media companies (especially Scripps E W Co, Salem Media Group, iHeartMedia, Entravision Communications, Townsquare Media, Beasley Broadcast Group Inc, and Entercom Communications Corp) I have seen the answer the old media has made to the more free-wheeling world of podcasting and "new media" that I previously did quite a bit in. Scripps, a broadcast conglomerate, owns and operates Stitcher in addition to its broadcast television holdings as well as the Newsy cable television channel and website. iHeartMedia, the massive radio conglomerate that just emerged from bankruptcy reorganization, now boasts that its rather closed garden of an app is number one for podcasts in the United States and is the easiest way to listen. I previously held shares in satellite radio service SiriusXM and, would you imagine, they also happen to own Pandora which now also provides some listings of podcasts in its walled garden. Spotify remains an independent company for the moment but you can listen to the Ubuntu Podcast within its walled garden too.

    If I ever get back into the swing of podcasting, I have a daunting task that keeps growing: submitting feeds to each individual walled garden. You don't have to be a doctrinaire "freedom by any means necessary" person to see that that paradigm might be a bit stifling. I should caution that it isn't confined to the USA, as the British Broadcasting Corporation is still fiddling with its BBC Sounds and iPlayer applications.

    I know I flat out do not have the programming skills to help. I do have the skill to explain why it is a social good to keep gPodder.net alive, though. Our world of podcasts should not be homogenized and formatted the way radio stations have become in the United States. Nobody needs to imitate Ira Glass to be authentic.

    A more freeform cultural world is possible if we keep up the architecture to make it happen. Right now gPodder.net is all we have and it would be a shame if it closed down. Now is a great time to lend a hand to keep a great piece of infrastructure alive to preserve free culture. Fullest details as to what this involves were posted to GitHub.

    Canonical Design Team: Octave turns to snaps to reduce dependency on Linux distribution maintainers


    Octave is a numerical computing environment largely compatible with MATLAB. As free software, Octave runs on GNU/Linux, macOS, BSD, and Windows. At the 2019 Snapcraft Summit, Mike Miller and Jordi Gutiérrez Hermoso of the Octave team worked on creating an Octave snap in stable and beta versions for the Snap Store.

    As Mike and Jordi explained, “Octave is currently packaged for most of the major distributions, but sometimes it’s older than we would like.” The goal of the Octave snap was to allow users to easily access the current stable release of the software, independently of Linux distribution release cycles. A snap would also let them release Octave on distributions not covered so far.

    Before starting with snaps, Octave depended on distribution maintainers, including those of CentOS, Debian, Fedora, and Ubuntu, for its binary packaging. With snaps, the situation has improved. The Octave team can now push out a release as soon as it is ready for users eager to get it now, while other more conservative users wait for more traditional packages from their distribution. Mike and Jordi saw this as the biggest benefit of coming to the Summit and creating an Octave snap.

    They also foresee a reduction in the amount of maintenance needed by using one package across many Linux distributions. The Snap Store will help users discover Octave more easily, and the Octave homepage will also feature snaps as a download option.

    Nevertheless, there was a learning curve: "We’re more used to Debian packaging, and snap packaging has different quirks that we’re not used to," comments Jordi. On the first day of their snap creation, it took time to set up the environment and get an initial build to work. "Time was needed for recompiling Octave each time with fixes for re-testing, as the application is large and has many dependencies, all of which must be compiled," explains Mike.

    The Octave team used Multipass on Linux to help build their snap and found that they “didn’t even notice it was there” for the most part. As Mike explains, “I had no issues other than a couple of teething problems due to the large build that Octave requires. However, after a little bit of digging in the documentation and asking the right people this was soon solved.”  

    Advice that they would pass on to others about using snaps is to avoid thinking that they are the same as containers. Mike and Jordi speak from experience because they started with this preconception as they explain, “this made it difficult because everything we did in the build environment had to be readjusted once we wanted to go into the runtime environment. We had to change the paths of everything.” Some functions that happened automatically in other packaging methods, like including libraries according to dependencies, must also be done manually for snaps.
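    To make the build-versus-runtime distinction concrete, here is a minimal, hypothetical snapcraft.yaml sketch for a large autotools-based application like Octave. The part layout, package lists and confinement choices are illustrative guesses, not the actual Octave packaging:

    ```yaml
    name: octave
    version: '5.1.0'
    summary: GNU Octave numerical computing environment
    description: |
      A high-level language largely compatible with MATLAB.
    base: core18
    grade: stable
    confinement: strict

    apps:
      octave:
        command: usr/bin/octave
        plugs: [home, network, x11, desktop]

    parts:
      octave:
        plugin: autotools
        source: https://ftp.gnu.org/gnu/octave/octave-5.1.0.tar.gz
        build-packages: [gfortran, libblas-dev, liblapack-dev]
        # Unlike deb packaging, runtime libraries are not pulled in
        # automatically; they must be staged into the snap explicitly
        stage-packages: [libgfortran4, libblas3, liblapack3]
    ```

    The `stage-packages` list is where the "include libraries manually" point above bites: every shared library the application needs at runtime has to be named there, because the snap only contains what is staged into it.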

    Coming to an event like the Summit is another tip that they would give to would-be snap developers. As Mike and Jordi put it, “reading the documentation and doing this ourselves would have taken longer than having everyone here. The fact we can just walk around and say hey, how do we do this and get that help.”

    Install Octave as a snap here.

    The post Octave turns to snaps to reduce dependency on Linux distribution maintainers appeared first on Ubuntu Blog.

    Canonical Design Team: Deploying Kubernetes at the edge, Part 1: building blocks


    Edge computing continues to attract attention and experience strong growth, helping to address unique challenges in the telecom, media, transportation, logistics, agriculture and other market segments. If you are new to these edge computing architectures, the diagram below is a simple abstraction of the emerging architecture.

    In this diagram, you can see the edge cloud sitting next to the field devices. In fact, there is a concept of extreme edge computing, which places compute resources on the field devices themselves (the leftmost circle). A gateway device connecting your office, your appliances and all your sensors is one example of extreme edge computing.

    So what exactly is edge computing?

    Edge computing is a variant of cloud computing in which your infrastructure services (compute, storage and networking) are physically closer to the field devices that generate data, giving you the dual benefits of lower latency and reduced network traffic. Lower latency improves the performance of field devices, enabling them not only to respond faster but also to respond to more events. Reduced network traffic helps lower costs and increase overall throughput: your core data center can support more field devices. Whether an application or service lives in the edge cloud or the core data center depends on the use case.

    How do you create an edge cloud?

    An edge cloud should have at least two layers. Both layers maximize operational efficiency and developer productivity, and each layer is built in a different way.

    The first layer is Infrastructure-as-a-Service (IaaS). In addition to providing compute and storage resources, the IaaS layer should satisfy the performance requirements of ultra-low latency and high bandwidth.

    The second layer is the Kubernetes layer, which provides a common platform to run your applications and services. Using Kubernetes is of course optional, but today it has proven to be a platform that lets enterprises and organizations take full advantage of edge computing. You can deploy Kubernetes on field devices, edge clouds, core data centers and public clouds. This multi-cloud deployment capability gives you complete flexibility to deploy workloads wherever you choose. Kubernetes also gives your developers the ability to simplify their devops practices and minimize the time spent integrating with heterogeneous operating environments.

    The next question is: how do you deploy these layers? At Canonical, we achieve this by using well-defined, dedicated technologies. Let's start with the IaaS layer that Kubernetes needs.

    Physical infrastructure lifecycle management

    The first step is to consider the physical infrastructure and which technology can manage it most effectively, transforming raw hardware into your IaaS layer. Metal-as-a-Service (MAAS) has proven its efficiency here. MAAS provides the underlying system for hardware discovery, giving you the flexibility to allocate compute resources and dynamically repurpose them. These underlying systems expose bare-metal servers to higher-level orchestration through an open API, just as you would use OpenStack and public clouds.

    With the latest MAAS release, you can automatically create edge clouds based on KVM pods, effectively enabling operators to create virtual machines with predefined sets of resources (memory, processors, storage and over-subscription ratios). You can do this through the command line and browser interface, as well as the MAAS API. You can also use Canonical's advanced orchestration solution, Juju, to build your own automation framework.

    As we demonstrated during the OpenStack Summit in Berlin, MAAS can also be deployed in an optimized way to run on top-of-rack switches.

    Orchestration of edge applications

    Once the physical infrastructure of the edge cloud has been discovered and configured, the second step is to choose an orchestration tool to easily install Kubernetes or other software on the edge infrastructure. With Juju, you can simply install Charmed Kubernetes, a fully upstream-compatible Kubernetes distribution. With Kubernetes you can run containerized workloads with the highest performance. In the telecom space, workloads such as containerized network functions (CNFs) fit this architecture very well.

    Charmed Kubernetes has further advantages. It can run in virtualized environments or directly on bare metal, and fully automated Charmed Kubernetes deployments come with high availability built in, allowing in-place, zero-downtime upgrades. This makes for proven, truly resilient edge infrastructure and solutions. Another benefit of Charmed Kubernetes is its ability to automatically detect and configure GPGPU resources to accelerate AI model inference and containerized transcoding workloads.
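    As an illustrative sketch (assuming MAAS is already registered as a cloud with Juju; "edge-maas" and the controller name are placeholders), deploying Charmed Kubernetes onto MAAS-managed edge hardware boils down to a few Juju commands:

    ```shell
    # Bootstrap a Juju controller onto the MAAS-managed edge hardware
    juju bootstrap edge-maas edge-controller

    # Deploy the full Charmed Kubernetes bundle
    juju deploy charmed-kubernetes

    # Watch the units converge, then copy the kubeconfig locally
    juju status
    juju scp kubernetes-master/0:config ~/.kube/config
    ```

    These commands require a live MAAS environment, so they are shown for orientation rather than as a runnable recipe.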

    Next steps

    With the right technologies chosen, it is now time to deploy the environment and start the validation process. The next blog post in this series will contain hands-on examples.

    The post Deploying Kubernetes at the edge, Part 1: building blocks appeared first on Ubuntu Blog.

    Thierry Carrez: Open source in 2019, Part 3/3


    21 years in, the landscape around open source has evolved a lot. In part 1 and part 2 of this 3-part series, I explained why today, while open source is more necessary than ever, it appears to no longer be sufficient. In this part, I'll discuss what we, open source enthusiasts and advocates, can do about that.

    This is not a call to change open source

    First, let me clarify what we should not do.

    As mentioned in part 2, since open source was coined in 1998, software companies have evolved ways to retain control while producing open source software, and in that process stripped users of some of the traditional benefits associated with F/OSS. But those companies were still abiding to the terms of the open source licenses, giving users a clear base set of freedoms and rights.

    Over the past year, a number of those companies have decided that they wanted even more control, in particular control of any revenue associated with the open source software. They proposed new licenses, removing established freedoms and rights in order to be able to assert that level of control. The open source definition defines those minimal freedoms and rights that any open source software should have, so the Open Source Initiative (OSI), as steadfast guardians of that definition, rightfully resisted those attempts.

    Those companies quickly switched to attacking OSI's legitimacy, pitching "Open Source" more as a broad category than a clear set of freedoms and rights. And they created new licenses, with deceptive naming ("community", "commons", "public"...) in an effort to blur the lines and retain some of the open source definition aura for their now-proprietary software.

    The solution is not in redefining open source, or claiming it's no longer relevant. Open source is not a business model, or a constantly evolving way to produce software. It is a base set of user freedoms and rights expressed in the license the software is published under. Like all standards, its value resides in its permanence.

    Yes, I'm of the opinion that today, "open source" is not enough. Yes, we need to go beyond open source. But in order to do that, we need to base that additional layer on a solid foundation: the open source definition.

    That makes the work of the OSI more important than ever. Open source used to be attacked from the outside, proprietary software companies claiming open source software was inferior or dangerous. Those were clear attacks that were relatively easy to resist: it was mostly education and advocacy, and ultimately the quality of open source software could be used to prove our point. Now it's attacked from the inside, by companies traditionally producing open source software, claiming that it should change to better fit their business models. We need to go back to the basics and explain why those rights and freedoms matter, and why blurring the lines ultimately weakens everyone. We need a strong OSI to lead that new fight, because it is far from over.

    A taxonomy of open source production models

    As I argued in previous parts, how open source is built ultimately impacts the benefits users get. A lot of us know that, and we all came up with our own vocabulary to describe those various ways open source is produced today.

    Even within a given model (say open collaboration between equals on a level playing field), we use different sets of principles: the OpenStack Foundation has the 4 Opens (open source, open development, open design, open community), the Eclipse Foundation has the Open Source Rules of Engagement (open, transparent, meritocracy), the Apache Foundation has the Apache Way... We all advocate for our own variant, focusing on differences rather than what we have in common: the key benefits those variants all enable.

    This abundance of slightly-different vocabulary makes it difficult to rally around and communicate efficiently. If we have no clear way to differentiate good all-benefits-included open source from twisted some-benefits-withheld open source, the confusion (where all open source is considered equal) benefits the twisted production models. I think it is time for us to regroup, and converge around a clear, common classification of open source production models.

    We need to classify those models based on which benefits they guarantee to the users of the produced software. Open-core does not guarantee availability, single-vendor does not provide sustainability nor does it allow users to efficiently engage and influence the direction of the software, while open-collaboration gives you all three.

    Once we have this classification, we'll need to heavily communicate around it, with a single voice. As long as we use slightly different terms (or mean slightly different things when using common terms), we maintain confusion which ultimately benefits the most restrictive models.

    Get together

    Beyond that, I think we need to talk more. Open source conferences used to be all about education and advocacy: what is this weird way of producing software, and why you should probably be interested in it. Once open source became ubiquitous, those style of horizontal open source conferences became less relevant, and were soon replaced by more vertical conferences around a specific stack or a specific use case.

    This is a good evolution: this is what winning looks like. The issue is: the future of open source is not discussed anymore. We rest on our laurels, while the world continually evolves and adapts. Some open source conference islands may still exist, with high-level keynotes still raising the issues, but those are generally one-way conversations.

    To do this important work of converging vocabulary and defining common standards on how open source is produced, Twitter won't cut it. To bootstrap the effort we'll need to meet, get around a table and take the time to discuss specific issues together. Ideally that would be done around some other event(s) to avoid extra travel.

    And we need to do that soon. This work is becoming urgent. "Open source" as a standard has lots of value because of all the user benefits traditionally associated with free and open source software. That created an aura that all open source software still benefits from today. But that aura is weakening over time, thanks to twisted production models. How much more single-vendor open source can we afford until "open source" no longer means you can engage with the community and influence the direction of the software?

    So here is my call to action, which concludes this series.

    In 2019, open source is more important than ever. Open source has not "won", this is a continuous effort, and we are today at a critical junction. I think open source advocates and enthusiasts need to get together, defining clear, standard terminology on how open source software is built, and start communicate heavily around it with a single voice. And beyond that, we need to create forums where those questions on the future of open source are discussed. Because whatever battles you win today, the world does not stop evolving and adapting.

    Obviously I don't have all the answers. And there are lots of interesting questions. It's just time we have a place to ask those questions and discuss the answers. If you are interested and want to get involved, feel free to contact me.

    Full Circle Magazine: Full Circle Weekly News #139


    System76’s Linux-powered Thelio desktop now available with 3rd gen AMD Ryzen Processors
    https://betanews.com/2019/07/07/system76-linux-thelio-amd-ryzen3/

    PyOxidizer Can Turn Python Code Into Apps for Windows, MacOS, Linux

    https://fossbytes.com/pyoxidizer-can-turn-python-code-apps-for-windows-macos-linux/

    Thousands of Android Apps Can Track Your Phone — Even if You Deny Permissions
    https://www.theverge.com/2019/7/8/20686514/android-covert-channel-permissions-data-collection-imei-ssid-location

    Anubis Android Banking Malware Returns with Extensive Financial App Hit List
    https://www.zdnet.com/article/anubis-android-banking-malware-returns-with-a-bang/

    Mozilla Firefox and the Nomination for Internet Villain Award
    https://itsfoss.com/mozilla-internet-villain/

    Ubuntu LTS Will Now Get the Latest Nvidia Driver Updates
    https://itsfoss.com/ubuntu-lts-latest-nvidia-drivers/

    Credits:
    Ubuntu “Complete” sound: Canonical
     
    Theme Music: From The Dust – Stardust

    https://soundcloud.com/ftdmusic
    https://creativecommons.org/licenses/by/4.0/

    The Fridge: Ubuntu Weekly Newsletter Issue 587


    Welcome to the Ubuntu Weekly Newsletter, Issue 587 for the week of July 7 – 13, 2019. The full version of this issue is available here.

    In this issue we cover:

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License


    Balint Reczey: Introducing ubuntu-wsl, the package making Ubuntu better and better on WSL


    The Ubuntu apps for the Windows Subsystem for Linux provide the very same packages you can find on Ubuntu servers, desktops, cloud instances and containers, and this ensures maximal compatibility with other Ubuntu installations. Until recently there was little work done to integrate Ubuntu with the Windows system running the WSL environment, but now this is changing.

    In Ubuntu, metapackages collect packages useful for a common purpose by depending on them, and ubuntu-wsl is the new metapackage collecting the integration packages to be installed on every Ubuntu WSL system. It pulls in wslu, "a collection of utilities for WSL", which lets you create shortcuts on the Windows desktop with wslusc, start the default Windows browser with wslview, and do a few other things.
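    As a quick illustration of what wslu provides (the fallback branch is mine, so the snippet also behaves sensibly outside WSL):

    ```shell
    #!/bin/sh
    # Open a URL in the default Windows browser via wslu's wslview.
    # Outside a WSL session wslview is absent, so fall back gracefully.
    if command -v wslview >/dev/null 2>&1; then
        wslview https://ubuntu.com
    else
        echo "wslview not found: run this inside an Ubuntu WSL session"
    fi
    ```
    
    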

    With updates to the ubuntu-wsl metapackage we will add new features to Ubuntu WSL installations to make them even more comfortable to use, thus if you have an older installation please install the package manually:

    sudo apt update
    sudo apt install ubuntu-wsl

    Oh, and one more thing: you can even set up sound and run graphical apps with a few manual steps. For details check out https://wiki.ubuntu.com/WSL!

    Ubucon Europe 2019: Call for Sponsors


    Corporate sponsorships

    This event can only be possible thanks to our sponsors. Your investment helps us create a greater experience for the open source community, while you still benefit from a considerable amount of exposure.

    If you are interested in sponsoring the event, please view the packages offered below and get in touch with us (the document describes how to do so).

    Individual sponsorships

    Individual sponsorships are donations made by individuals that help this Ubucon happen as well. Individual sponsors will not be provided with free tickets but will be highlighted on the website and during the event. Donate by clicking here.

    Ubucon Europe 2019: Our Diamond Sponsor – Ubuntu!


    Our Diamond Sponsor of this event is Ubuntu, an open source software operating system that runs from the desktop, to the cloud, to all your internet connected things.

    Linux was already established in 2004, but it was fragmented into proprietary and unsupported community editions, and free software was not a part of everyday life for most computer users. That’s when Mark Shuttleworth gathered a small team of Debian developers who together founded Canonical and set out to create an easy-to-use Linux desktop called Ubuntu.
    However, the governance of Ubuntu is somewhat independent of Canonical, with volunteer leaders from around the world taking responsibility for many critical elements of the project. Mark Shuttleworth, as project founder, short-lists public nominees as candidates for the Community Council and Technical Board, and they in turn screen and nominate candidates for a wide range of boards, councils and teams that take responsibility for aspects of the project.

    Thanks to them, we have received a significant support to sustain our event and our journey to give you one of the best open source experiences in Sintra.

    Want to jump onboard as well?
    Visit our Call for Sponsor post for more information.

    Daniel Pocock: Google, Money and Censorship in Free Software communities


    On 30 June 2019, I sent the email below to the debian-project mailing list.

    It never appeared.

    Alexander Wirt (formorer) has tried to justify censoring the mailing list in various ways. Wirt has multiple roles, as both Debian mailing list admin and also one of Debian's GSoC administrators and mentors. Google money pays for interns to do work for him. It appears he has a massive conflict of interest when using the former role to censor posts about Google, which relates to the latter role and its benefits.

    Wirt has also made public threats to censor other discussions, for example, the DebConf Israel debate. In that case he has wrongly accused people of antisemitism, leaving people afraid to speak up again. The challenges of holding a successful event in that particular region require a far more mature approach, not a monoculture.

    Why are these donations and conflicts of interest hidden from the free software community who rely on, interact with and contribute to Debian in so many ways? Why doesn't Debian provide a level playing field, why does money from Google get this veil of secrecy?

    Is it just coincidence that a number of Google employees who spoke up about harassment are forced to resign and simultaneously, Debian Developers who spoke up about abusive leadership are obstructed from competing in elections? Are these symptoms of corporate influence?

    Is it coincidence that the three free software communities censoring my recent blog about human rights from their Planet sites (FSFE, Debian and Mozilla, evidence of censorship) are also the communities where Google money is a disproportionate part of the budget?

    Could the reason for secrecy about certain types of donation be motivated by the knowledge that unpleasant parts of the donor's culture also come along for the ride?

    The email the cabal didn't want you to see

    Subject: Re: Realizing Good Ideas with Debian Money
    Date: Sun, 30 Jun 2019 23:24:06 +0200
    From: Daniel Pocock <daniel@pocock.pro>
    To: debian-project@lists.debian.org, debian-devel@lists.debian.org
    
    
    
    On 29/05/2019 13:49, Sam Hartman wrote:
    > [moving a discussion from -devel to -project where it belongs]
    >
    >>>>>> "Mo" == Mo Zhou <lumin@debian.org> writes:
    >
    >     Mo> Hi,
    >     Mo> On 2019-05-29 08:38, Raphael Hertzog wrote:
    >     >> Use the $300,000 on our bank accounts?
    >
    > So, there were two $300k donations in the last year.
    > One of these was earmarked for a DSA equipment upgrade.
    
    
    When you write that it was earmarked for a DSA equipment upgrade, do you
    mean that was a condition imposed by the donor or it was the intention
    of those on the Debian side of the transaction?  I don't see an issue
    either way but the comment is ambiguous as it stands.
    
    Debian announced[1] a $300k donation from Handshake foundation.
    
    I couldn't find any public disclosure about other large donations and
    the source of the other $300k.
    
    In Bits from the DPL (December 2018), former Debian Project Leader (DPL)
    Chris Lamb opaquely refers[2] to a discussion with Cat Allman about a
    "significant donation".  Although there is a link to Google later in
    Lamb's email, Lamb fails to disclose the following facts:
    
    - Cat Allman is a Google employee (some people would already know that,
    others wouldn't)
    
    - the size of the donation
    
    - any conditions attached to the donation
    
    - private emails from Chris Lamb indicated he felt some pressure,
    influence or threat from Google shortly before accepting their money
    
    The Debian Social Contract[3] states that Debian does not hide our
    problems.  Corporate influence is one of the most serious problems most
    people can imagine, why has nothing been disclosed?
    
    Therefore, please tell us,
    
    1. who did the other $300k come from?
    2. if it was not Google, then what is the significant donation from Cat
    Allman / Google referred[2] to in Bits from the DPL (December 2018)?
    3. if it was from Google, why was that hidden?
    4. please disclose all conditions, pressure and influence relating to
    any of these donations and any other payments received
    
    Regards,
    
    Daniel
    
    
    1. https://www.debian.org/News/2019/20190329
    2. https://lists.debian.org/debian-devel-announce/2018/12/msg00006.html
    3. https://www.debian.org/social_contract
    

    Censorship on the Google Summer of Code Mentor's mailing list

    Google also operates a mailing list for mentors in Google Summer of Code. It looks a lot like any other free software community mailing list except for one thing: censorship.

    Look through the "Received" headers of messages on the mailing list and you can find examples of messages that were delayed for some hours waiting for approval. It is not clear how many messages were silently censored, never appearing at all.

    Recent attempts to discuss the issue on Google's own mailing list produced an unsurprising result: more censorship.

    However, a number of people have since contacted me personally about their negative experiences with Google Summer of Code. I'm publishing below the message that Google didn't want you to see.

    Subject: [GSoC Mentors] discussions about GSoC interns/students medical status
    Date: Sat, 6 Jul 2019 10:56:31 +0200
    From: Daniel Pocock <daniel@pocock.pro>
    To: Google Summer of Code Mentors List <google-summer-of-code-mentors-list@googlegroups.com>
    
    
    Hi all,
    
    Just a few months ago, I wrote a blog lamenting the way some mentors
    have disclosed details of their interns' medical situations on mailing
    lists like this one.  I asked[1] the question: "Regardless of what
    support the student received, would Google allow their own employees'
    medical histories to be debated by 1,000 random strangers like this?"
    
    Yet it has happened again.  If only my blog hadn't been censored.
    
    If our interns have trusted us with this sensitive information,
    especially when it concerns something that may lead to discrimination or
    embarrassment, like mental health, then it highlights the enormous trust
    and respect they have for us.
    
    Many of us are great at what we do as engineers, in many cases we are
    the experts on our subject area in the free software community.  But we
    are not doctors.
    
    If an intern goes to work at Google's nearby office in Zurich, then they
    are automatically protected by income protection insurance (UVG, KTG and
    BVG, available from all major Swiss insurers).  If the intern sends a
    doctor's note to the line manager, the manager doesn't have to spend one
    second contemplating its legitimacy.  They certainly don't put details
    on a public email list.  They simply forward it to HR and the insurance
    company steps in to cover the intern's salary.
    
    The cost?  Approximately 1.5% of the payroll.
    
    Listening to what is said in these discussions, many mentors are
    obviously uncomfortable with the fact that "failing" an intern means
    they will not even be paid for hours worked prior to a genuine accident
    or illness.  For 1.5% of the program budget, why doesn't Google simply
    take that burden off the mentors and give the interns peace of mind?
    
    On numerous occasions Stephanie Taylor has tried to gloss over this
    injustice with her rhetoric about how we have to punish people to make
    them try harder next year.  Many of our interns are from developing
    countries where they already suffer injustice and discrimination.  You
    would have to be pretty heartless to leave these people without pay.
    Could that be why Googlespeak clings to words like "fail" and "student"
    instead of "not pay" and "employee"?
    
    Many students from disadvantaged backgrounds, including women, have told
    me they don't apply at all because of the uncertainty about doing work
    that might never be paid.  This is an even bigger tragedy than the time
    mentors lose on these situations.
    
    Regards,
    
    Daniel
    
    
    1.
    https://danielpocock.com/google-influence-free-open-source-software-community-threats-sanctions-bullying/
    
    --
    Former Debian GSoC administrator
    https://danielpocock.com
    

    Canonical Design Team: Issue #2019.07.22 – Kubeflow and Conferences, 2019

    • Kubeflow at OSCON 2019 – Over 10 sessions! Covering security, pipelines, productivity, ML ops and more. Some of the sessions are led by end-users, which means you’ll get the real deal about using Kubeflow in your production solution.
    • Kubeflow at KubeCon Europe 2019 in Barcelona – The top Kubeflow events from KubeCon in Barcelona, 2019. Tutorials, Pipelines, and Kubeflow 1.0 ruminations. The discussion on when Kubeflow will reach 1.0 should be of interest to those waiting for that milestone.
    • Kubeflow Contributor Summit 2019 – Presentations and slide decks, 22+ of them. Reviewing them will help you understand how the sausage is made. One of the interesting videos focuses on a panel discussion with machine learning practitioners and experts discussing the dynamics of machine learning at their workplace.
    • Kubeflow events calendar – Find a past or future event. This is a great resource for reviewing content from community leaders and leveling up on the current state of Kubeflow. If you are aware of something that is missing, feel free to add the content through GitHub – become a community member!
    • Use Case Spotlight: IBM’s photo-scraping scandal shows what a weird bubble AI researchers live in. This bubble is all about data: who owns it, who can monopolize it, who is monetizing it, and what the expectations around it are. Those expectations are the crux of the issue – people using the data may be at odds with the people supplying the data.

    The post Issue #2019.07.22 – Kubeflow and Conferences, 2019 appeared first on Ubuntu Blog.
