
Hacker Public Radio

Hacker Public Radio is a podcast that releases shows every weekday, Monday through Friday. Our shows are produced by the community (you) and can be on any topic that is of interest to hackers and hobbyists.

Available episodes

5 of 10
  • HPR4342: How I use Git to blog on the web and gopherspace
    This show has been flagged as Clean by the host.

    First, I create a Git repository some place on the server. This is the Git repo that's going to be populated with your content, but it doesn't have to be in a world-viewable location on your server. Instead, you can place it anywhere, and then use a Git hook or a cronjob to copy files from it to a world-viewable directory. I don't cover that here. I refer to this location as the staging directory.

    Next, create a bare repository on your server. In its hooks directory, create a shell script called post-receive:

        #!/usr/bin/bash
        #
        while read oldrev newrev refname
        do
            BR=`git rev-parse --symbolic --abbrev-ref $refname`
            if [ "$BR" == "master" ]; then
                WEB_DIR="/my/staging/dir"
                export GIT_DIR="$WEB_DIR/.git"
                pushd $WEB_DIR > /dev/null
                git pull
                popd > /dev/null
            fi
        done

    Now when you push to your bare repository, you trigger the post-receive script, which in turn triggers a git pull in your staging directory. Once your staging directory contains the content you want to distribute, you can copy the files to live directories, or you could make your staging directory live (remember to exclude the .git directory, though), or whatever you want.

    For gopher, I create a file listing by date using a shell script:

        #!/usr/bin/bash

        SED=/usr/bin/sed
        DIR_BASE=/my/live/dir
        DIR_LIVE=blog
        DIR_STAGING=staging
        DATE=${DATE:-`date --rfc-3339=date`}

        for POST in `find "$DIR_BASE"/"$DIR_STAGING" \
            -type f -name "item.md" -exec grep -Hl "$DATE" {} \;`; do
            POSTDIR=`dirname "$POST"`
            cp "$POST" "$DIR_BASE"/"$DIR_LIVE"/`basename $POSTDIR`.txt
            echo -e 0Latest'\t'../"$DIR_LIVE"/`basename $POSTDIR`.txt > /tmp/updater.tmp
            echo -e 0"$DATE" `basename $POSTDIR`'\t'../"$DIR_LIVE"/`basename $POSTDIR`.txt \
                >> /tmp/updater.tmp
            "${SED}" -i "/0Latest/ r /tmp/updater.tmp" "$DIR_BASE"/date/gophermap
            "${SED}" -i '0,/0Latest/{/0Latest/d;}' "$DIR_BASE"/date/gophermap
            /usr/bin/rm /tmp/updater.tmp
        done
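    As a supplement to the notes above, here is a minimal sketch of the one-time setup and the first push. The bare-repo path, remote name, and hostname (/srv/git/blog.git, server, example.com) are illustrative assumptions, not from the episode; the staging path matches the hook above:

        # On the server: create the bare repository and clone it to staging.
        git init --bare /srv/git/blog.git
        git clone /srv/git/blog.git /my/staging/dir

        # Install the post-receive script shown above, then make it executable.
        chmod +x /srv/git/blog.git/hooks/post-receive

        # On your workstation: add the bare repo as a remote and push.
        git remote add server ssh://user@example.com/srv/git/blog.git
        git push server master    # fires post-receive, which pulls into staging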
    --------  
  • HPR4341: Transferring Large Data Sets
    This show has been flagged as Clean by the host.

    Transferring Large Data Sets

    Very large data sets present their own problems. Not everyone has directories with hundreds of gigabytes of project files, but I do, and I assume I'm not the only one. For instance, I have a directory with over 700 radio shows; many of these directories also have a podcast, and they also have pictures and text files. Doing a properties check on the directory, I see 450 gigabytes of data.

    When I started envisioning Libre Indie Archive I wanted to move the directories into archival storage using optical drives. My first attempt at this didn't work, because I lost metadata when I wrote the optical discs, since optical discs are read only. After further work and study I learned that tar files can preserve metadata if they are created and extracted as root. In fact, if you are running tar as root, preserving file ownership and permissions is the default. So this means that optical discs are an option if you write tar archives onto them.

    I have better success rates with 25 GB Blu-ray Discs than with the 50 GB discs. So, if your directory breaks up into projects that fit on 25 GB discs, that's great. My data did not do this easily, but tar does have an option to write a data set to multiple tar files, each with a maximum size, labelling them -0, -1, etc. When using this multi-volume feature you cannot use compression, so you will get tar files, not tar.gz files.

    It's better to break the file sets up into more reasonable sizes, so I decided to divide the shows up alphabetically by title: all the shows starting with the letter a would be one data set, and so on down the alphabet, one letter at a time. Most letters would result in a single tar file, labeled -0, that would fit on a 25 GB disc. Many letters, however, took two or even three tar files that would have to be written on different discs and then concatenated on the primary system before being extracted to the correct location in primaryfiles. There is a companion program to tar, called tarcat, that I used to combine the 2 or 3 tar files split by length into a single tar file that could be extracted. I ran engrampa as root to extract the files.

    So, I used a tar command on the working system where my Something Blue radio shows are stored. Then I used K3b to burn these files onto a 25 GB Blu-ray Disc, carefully labeling the discs and keeping a text file to track which files I had already copied to disc. Then, on the Libre Indie Archive primary system, I copied the file or files for that data set from the Blu-ray to the boot drive. Then I would use tarcat to combine the files if there was more than one file for that data set. And finally I would extract the files to primaryfiles by running engrampa as root.

    Now I'm going to go into detail on each of these steps.

    First make sure that the Libre Indie Archive program, prep.sh, is in your home directory on your workstation. Then, from the data directory to be archived, in my case the something_blue directory, run prep.sh like this:

        ~/prep.sh

    This will create a file named IA_Origin.txt that lists the date, the computer and directory being archived, and the users and userids on that system. All very helpful information to have if at some time in the future you need to do a restore.

    Next create a tar data set for each letter of the alphabet. (You may want to divide your data set in a different way.)
    Open a terminal in the same directory as the data directory, my something_blue directory, so that ls displays something_blue (your data directory). I keep the Something Blue shows and podcasts in subdirectories in the something_blue directory. Here's the tar command.

    Example a:

        sudo tar -cv --tape-length=20000000 \
            --file=somethingblue-a-{0..50}.tar \
            /home/larry/delta/something_blue/a*

    This is for the letter a, so the --file parameter includes the letter a. The numbers 0..50 in the curly brackets are the sequence numbers for the files. I only had one file for the letter a, somethingblue-a-0.tar. The last parameter is the source for the tar files, in this case /home/larry/delta/something_blue/a*, that is, all of the files and directories in the something_blue directory that start with the letter a. You may want to change the --tape-length parameter. As listed it stores up to 19.1 GB; the maximum capacity of a 25 GB Blu-ray is 23.3 GB for data storage.

    Example b: For the letter b, I ended up with three tar files:

        somethingblue-b-0.tar
        somethingblue-b-1.tar
        somethingblue-b-2.tar

    I will use these files in the example below, using tarcat to combine them.

    I use K3b to burn Blu-ray data discs. Besides installing K3b you have to install some other programs, and there is a particular setup that needs to be done, including selecting cdrecord and no multisession. Here's an excellent article that goes step by step through the installation and setup: How to burn Blu-ray discs on Ubuntu and derivatives using K3b? https://en.ubunlog.com/how-to-burn-blu-ray-discs-on-ubuntu-and-derivatives-using-k3b/ I also always check Verify data, and I use the Linux/Unix file system, not Windows, which will rename your files if the filenames are too long.

    I installed a Blu-ray reader into the primary system and used thunar to copy the files from the Blu-ray Disc to the boot drive. In the primaryfiles directory I make a subdirectory, something_blue, to hold the archived shows.

    If there is only one file, as in example a above, you can skip the concatenation step. If there is more than one file, as in example b above, you use tarcat to concatenate these files into one tar file. You have to do this: if you try to extract from just one of the numbered files when there is more than one, you will get an error. So if I try to extract from somethingblue-b-0.tar and get an error, it doesn't mean there's anything wrong with that file. It just has to be concatenated with the other b files before it can be extracted. Here's the tarcat command I used for example b, above:

        tarcat somethingblue-b-0.tar somethingblue-b-1.tar somethingblue-b-2.tar > sb-b.tar

    This will concatenate the three smaller tar files into one bigger tar file named sb-b.tar.
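    Before extracting, a quick sanity check on the combined archive can save time. This is a sketch, not from the episode; it uses plain tar instead of engrampa, and the target directory is hypothetical:

        # List the first few entries to confirm the combined archive is readable.
        tar -tvf sb-b.tar | head

        # Extract as root; running tar as root preserves ownership and
        # permissions by default. Note that GNU tar strips the leading "/"
        # and recreates the stored path under the target directory, so you
        # may need to move the results into place afterwards.
        sudo tar -xvf sb-b.tar -C /tmp/restore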
    In order to preserve the metadata you have to extract the files as root. To make it easier to select the files to be extracted and where to store them, I use the GUI archive manager, engrampa. To run engrampa as root, open a terminal with CTRL-ALT-t and use this command:

        sudo -H engrampa

    Click Open and select the tar file to extract. Then follow the path until you are in the something_blue directory (instead of the something_blue directory you will go to your data directory) and you are seeing the folders and files you want to extract. Type Ctrl-a to select them all. Then click Extract at the top of the window. Open the directory where you want the files to go, in my case primaryfiles/something_blue, and click Extract again in the lower right.

    After the files are extracted, go to your data directory in primaryfiles and check that the directories and files are where you expect them to be. You can also open a terminal in that directory and type ls -l to review the metadata.

    When dealing with data chunks sized 20 GB or more, each one of these steps takes time. The reason I like using an optical disc backup to transfer the files from the working system to Libre Indie Archive is that it gives me an easy-to-store backup that is not on a spinning drive and that cannot be overwritten. Still, optical disc storage is not perfect either. It's just another belt to go with your suspenders.

    Another way to transfer directories into the primaryfiles directory is with ssh over the network. This is not as safe as using optical discs, and it also does not provide the extra snapshot backup. It also takes a long time, but it is not as labor intensive. After I spend some more time thinking about this and testing, I will do a podcast about transferring large data sets with ssh.

    Although I am transferring large data sets to move them into archival storage using Libre Indie Archive, there are many other situations where you might want to move a large data set while preserving the metadata. So what I have written about tar files, optical discs, and running thunar and engrampa as root is generally applicable.

    As always, comments are appreciated. You can comment on Hacker Public Radio or on Mastodon. Visit my blog at home.gamerplus.org where I will post the show notes and embed the Mastodon thread for comments about this podcast. Thanks.
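    For the ssh route mentioned above, one common approach (a sketch under assumptions, not necessarily the method the host plans to cover) is rsync over ssh, run as root on both ends so ownership survives. The hostname and destination path here are hypothetical:

        # -a preserves permissions, ownership, timestamps, and symlinks;
        # -A/-X add ACLs and extended attributes; -H keeps hard links.
        sudo rsync -aAXH --info=progress2 \
            /home/larry/delta/something_blue/ \
            root@archive.example.com:/home/archive/primaryfiles/something_blue/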
    --------  
  • HPR4340: Playing Civilization IV, Part 7
    This show has been flagged as Clean by the host.

    Civilization IV added some new victory types, and I decided to illustrate one of them, the Culture victory, by going through an example of achieving it.

    Links:
    https://civilization.fandom.com/wiki/Speed_(Civ4)
    https://civilization.fandom.com/wiki/Cottage_(Civ4)
    https://www.palain.com/gaming/civilization-iv/playing-civilization-iv-part-7/
    --------  
  • HPR4339: Review of the YR01 smart lock
    This show has been flagged as Clean by the host.

    This episode gives a mini-review of the Yamiry YR01 Fingerprint Smart Knob. This keyless entry system replaces your door handle and latch with a handle-and-latch system that lets you unlock your door via fingerprint, PIN codes, Bluetooth fobs, your phone's Bluetooth, or your phone's Wi-Fi.

    References:
    Yamiry Fingerprint Smart Knob - Keyless Entry Digital Lock for Front Door (https://www.amazon.com/Smart-Door-Handle-Lock-Keypad/dp/B0C66NCTXX)
    NICE Digi (https://nice-digi.com/)
    --------  
  • HPR4338: 328eforth
    This show has been flagged as Clean by the host.

    A review of the book The Arduino Controlled by eForth, written by Dr. Chen-Hanson Ting and published in 2018.

    The late Dr. Ting was a chemist turned engineer. He earned a PhD in chemistry at the University of Chicago in 1965 and taught chemistry in Taiwan until 1975, then worked as a firmware engineer until his retirement in 2000. He was a Forth advocate for more than 50 years, especially for a Forth called eForth that has been ported to many devices, including the Microchip ATmega328 found on the Arduino Uno board.

    I found this book while searching for Forths for the Arduino Uno boards. The source code and documentation for eForth are available in a lot of places; I will put a few links in the show notes. I believe I mentioned this Forth in an earlier HPR episode where I talked about choosing a Forth.

    Forth Interest Group: https://forth.org
    https://wiki.forth-ev.de
    https://chochain.github.io (pdf)

    When I first encountered Dr. Ting's Forth for the Arduino I was interested for one reason: it was easily assembled using avra, the GNU port of the Atmel assembler. This was nice because using Atmel's (now Microchip's) assemblers on Linux required installing Wine, and installing Wine, in the past, on 64-bit Slackware meant installing 32-bit libraries to have a multilib Slackware (that's not an issue now). Assembling the Forth code in avra is quick; the result is only a little over 5k in size.

    After playing with eForth for a while I became frustrated, because I could create new words in the dictionary and the examples ran fine, but nothing persisted across reboots. So I dropped eForth and ended up using FlashForth, which is a great, robust, full-featured Forth. I still recommend FlashForth if you're starting out with Forth on a microcontroller; it's solid software with good documentation.

    At the end of last year I thought it would be fun to write my own Forth, and after looking into doing that I revisited 328eForth and thought: no, how about I fix the problems with eForth on the Arduino? So I dug out the book and began reading.

    Jones Forth port at https://ratfactor.com/nasmjf

    The book has six parts. Part 1 is Dr. Ting's musings on how he ended up creating 328eForth. Part 2 explains installing eForth. Part 3 begins exercising the Arduino board using Forth in the interactive interpreter. Part 4 explains 328eForth's implementation and design decisions. Part 5 is the full commented source code of 328eForth and, this is the best part, Dr. Ting's explanation of what is going on in the code, broken down by functional sections. A gold mine of information! Part 6 is his conclusions, with examples for learning Forth.

    This is a great free software project. Nothing is hidden. It is accessible to anybody who will take the time to read and dig into the code. It makes assembly language much less dark and foreboding.

    I'll finish by reading a couple of paragraphs from Dr. Ting's book. Dr. Ting concludes:

    "People using computers are trained to be slaves. You are taught to push certain buttons, and you are taught to push certain keys. Then, you get employed to push buttons and keys to work as slaves. Computers, programming languages, and operating systems are made complicated to enslave people. Computers are not complicated beyond comprehension. Programming languages and operating systems do not have to be complicated. If you get a sharp knife, you can be the master of your destination. 328eForth is a sharp knife. Go use it."

    The hacker ethos.
    The next podcast I produce will cover installing eForth on an Arduino board and solving that pesky problem of losing words between boots.
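    As a rough illustration of the avra workflow mentioned above, assembling and flashing might look something like this sketch. The source file name, programmer, and port are assumptions for illustration, not instructions from the book:

        # Assemble with avra; this emits an Intel HEX file next to the source
        # (the file name here is hypothetical).
        avra 328eforth.asm

        # Flash with avrdude using an ISP programmer (a USBasp, as one example).
        avrdude -c usbasp -p m328p -U flash:w:328eforth.hex:i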
    --------  


About Hacker Public Radio

Hacker Public Radio is a podcast that releases shows every weekday, Monday through Friday. Our shows are produced by the community (you) and can be on any topic that is of interest to hackers and hobbyists.
Podcast website
