David Mandelberg
https://david.mandelberg.org
Freelance cyber security consultant, software developer, and more.

Bailey at Nod Brook (29 Nov 2017)

Waves of Sin (28 Nov 2017)

What do you get when you combine a pseudo‐random number generator, the sine function, and some C++ code? In this case, an album of very strange music. With a POSIX shell script and ffmpeg thrown into the mix, you also get music videos.

[Cover art for Waves of Sin: graph paper with "Waves of Sin" written using mostly sine waves for the letters, and "David Mandelberg" written by hand with a pencil]

The album opens with Signals of Sin and closes with Signals of Sin (reprise), because of course music needs telephony signals to frame it. (These are the only two tracks that are not pseudo‐randomly generated.)

Tracks 2–10 explore what happens when you use a pseudo‐random number generator to pick the number of simultaneous “notes,” and the duration, pitch, volume, and left–right pan of each note. To create variety, some tracks (e.g., Sparse Waves of Sin) have very few simultaneous notes, while others (Way Too Many Waves of Sin) have… more. Some tracks have normal‐length notes; Super Fast Waves of Sin and Hyper Fast Waves of Sin don’t. Some tracks include harmonics (e.g., Harmonic Waves of Sin); others have only fundamentals. And for the strangest tracks (Interfering Waves of Sin and Harmonic Interfering Waves of Sin), each note is represented by a spread of interfering frequencies, instead of a single fundamental frequency and optional harmonics.
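
The generation code itself isn't shown in this post, but as a rough sketch of the general idea (every range, distribution, and name below is my own guess rather than the real implementation), pseudo‐randomly picking a note and sampling its sine wave might look something like this:

#include <cmath>
#include <random>

// One pseudo-randomly chosen "note": a sine wave with a duration, pitch,
// volume, and left-right pan. All of the ranges are arbitrary guesses.
struct Note {
  double duration_seconds;
  double frequency_hz;
  double volume;  // 0 = silent, 1 = full volume
  double pan;     // 0 = fully left, 1 = fully right
};

Note RandomNote(std::mt19937& rng) {
  std::uniform_real_distribution<double> duration(0.2, 3.0);
  std::uniform_real_distribution<double> frequency(100.0, 2000.0);
  std::uniform_real_distribution<double> unit(0.0, 1.0);
  return Note{duration(rng), frequency(rng), unit(rng), unit(rng)};
}

// Sample value of one note at time t seconds after the note starts, for the
// left (channel == 0) or right (channel == 1) channel.
double NoteSample(const Note& note, double t, int channel) {
  if (t < 0.0 || t >= note.duration_seconds) return 0.0;
  constexpr double kPi = 3.14159265358979323846;
  const double gain = (channel == 0) ? 1.0 - note.pan : note.pan;
  return note.volume * gain * std::sin(2.0 * kPi * note.frequency_hz * t);
}

A full track would presumably sum many overlapping notes like this (plus optional harmonics) and normalize the result, but that part is omitted here.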

Finally, the album would not be complete without the raw output of its pseudo‐random number generator, i.e., some white noise: Dedication to Mersenne Twister 19937, Without Which This Album Would Have Been Slightly Different.
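
Turning raw Mersenne Twister output into white noise samples is about as simple as it sounds; a minimal sketch (the scaling and sample count are arbitrary):

#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// White noise in [-1, 1], taken directly from the raw output of
// std::mt19937 (the Mersenne Twister 19937 engine).
std::vector<double> WhiteNoise(std::uint32_t seed, std::size_t num_samples) {
  std::mt19937 rng(seed);
  std::vector<double> samples(num_samples);
  for (double& sample : samples) {
    // mt19937 produces uniform 32-bit integers; rescale them to [-1, 1].
    sample = rng() / 2147483647.5 - 1.0;
  }
  return samples;
}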

I’m definitely not claiming that this is my new favorite form of music, or even that it’s particularly consonant, but after playing around with the code and parameters a bunch, I grew to actually enjoy this music, unlike my previous album, which just started to grate on me more and more.

Functional Video Generation (21 Nov 2017)

Back in 2011–2012, I played around with generating static images from mathematical functions. Each function effectively took an (x,y) coordinate and returned an RGB tuple. (By mathematical functions, I mean deterministic functions that don’t rely on external input and have no side effects; i.e., given the same input parameters, the function will always return the same value, and do nothing other than return that value.) Some scaffolding code called the function repeatedly and performed anti‐aliasing to produce an image. It was fun to see what I could do just by mapping spatial coordinates to color values.

[Example output images: a zone plate with blurry colors in the background; black circles of mass surrounded by colors indicating the gravity field; highly saturated, bright contour lines on a black background]

Then earlier this year, I decided to add a time coordinate and make abstract, ambient videos using the same technique. The single‐threaded Python code I used for still images was too slow to be practical for long videos, so I started from scratch with new C++ code and an improved interface. The video scaffolding code exposes the interface below for subclasses to define what the video will look like. (The use of ThreadState and TimeState does make the functional abstraction a bit less pure, but it’s important for performance not to redo some calculations for every sub‐pixel.) Once a subclass provides a function that defines what the video looks like in theory, the scaffolding code handles anti‐aliasing, motion blur, and rendering to turn it into a finite video that a computer can display.

// Main class to generate a video (or still frame). Subclass this. Use
// ThreadState for any information local to a thread, and TimeState for
// anything local to a single point in time. (There could be multiple
// TimeStates per frame if temporal oversampling is used.)
template <typename ThreadState = NullState, typename TimeState = NullState>
class VideoGenerator : public VideoGeneratorInterface {
…
 protected:
  // Get the color value at a single point in space and time. (x,y) are spatial
  // coordinates, with the smaller dimension in the range [-1,1]. For a 2:1
  // aspect ratio, x would be in the range [-2,2] and y in [-1,1]; for 1:2, x
  // would be in [-1,1] and y in [-2,2]. t is in seconds.
  virtual Rgb PointValue(
      const ThreadState* thread_state, const TimeState* time_state,
      float x, float y, double t) = 0;

  // Override this if the per-thread state shouldn't be null.
  virtual ::std::unique_ptr<ThreadState> GetThreadState() {
    return nullptr;
  }

  // Override this if the per-point-in-time state shouldn't be null.
  virtual ::std::unique_ptr<TimeState> GetTimeState(
      const ThreadState* thread_state, double t) {
    return nullptr;
  }
…
};

The first video I made was directly inspired by the last still image I’d made with the technique, and used a very similar function. For each time value t, three‐dimensional Perlin noise provides a map from (x,y,t) to a value that I used as the elevation of the point (x,y) on a topographic map. The elevation values are then used to draw contour lines, with hue denoting the elevation of the line and lightness denoting how close each point is to its nearest contour line. The code is only 43 lines long including boilerplate, and produces a 12‐hour video of gently moving colorful curves.
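
The actual 43‐line implementation isn't reproduced here; purely to illustrate the shape of such a function, a subclass might look roughly like this (PerlinNoise3, HsvToRgb, and every constant below are my own assumptions, not the real code):

#include <cmath>

// Assumed helpers, not shown in the post: 3D Perlin noise returning a value
// in roughly [0, 1], and an HSV-to-RGB conversion producing the Rgb type
// used by the scaffolding.
float PerlinNoise3(float x, float y, float z);
Rgb HsvToRgb(float hue, float saturation, float value);

class ContourVideo : public VideoGenerator<> {
 protected:
  Rgb PointValue(const NullState* /*thread_state*/,
                 const NullState* /*time_state*/,
                 float x, float y, double t) override {
    // Elevation of this point, from 3D noise over space and (slowed) time.
    const float elevation = PerlinNoise3(x, y, static_cast<float>(t) * 0.01f);

    // Normalized distance to the nearest contour line, with contour lines
    // every 0.05 units of elevation (an arbitrary choice for this sketch).
    constexpr float kInterval = 0.05f;
    const float distance =
        std::fabs(std::remainder(elevation, kInterval)) / (kInterval / 2);

    // Hue tracks the elevation; brightness peaks on the contour lines.
    return HsvToRgb(elevation - std::floor(elevation), 1.0f, 1.0f - distance);
  }
};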

Next, I played around with interference patterns similar to moiré patterns, to try to generate something even more abstract than an abstracted topographic map. This code uses a set of overlapping, moving blinds to generate patterns of light and dark. The blinds use Perlin noise to independently vary their size and rotation, and to move side to side. Separately, the hue for the entire screen varies over time. Warning: the end result might cause motion sickness in some people. I tried my best to avoid it, but I’m not sure how well I succeeded.
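
As with the contour video, the real code isn't reproduced here; one moving "blind" layer might be computed along these lines (the noise helper and every constant are my own assumptions):

#include <cmath>

// Assumed helper, not from the post: 2D Perlin noise in roughly [-1, 1].
float PerlinNoise2(float x, float y);

// Brightness contribution of one blind layer at spatial point (x, y) and
// time t. The layer is a set of parallel stripes whose angle, width, and
// sideways offset each drift slowly, driven by independent noise.
float BlindLayer(int layer, float x, float y, double t) {
  constexpr float kPi = 3.14159265f;
  const float seed = layer * 100.0f;  // arbitrary per-layer noise offset
  const float time = static_cast<float>(t);
  const float angle = kPi * PerlinNoise2(seed, time * 0.02f);
  const float width = 0.1f + 0.05f * PerlinNoise2(seed + 1.0f, time * 0.02f);
  const float shift = PerlinNoise2(seed + 2.0f, time * 0.05f);

  // Project the point onto the stripe direction, then fade smoothly between
  // light and dark stripes with a sine wave.
  const float u = x * std::cos(angle) + y * std::sin(angle) + shift;
  return 0.5f + 0.5f * std::sin(2.0f * kPi * u / width);
}

The final brightness for a pixel could then be the product of several such layers, with a separate, slowly varying function of t alone providing the hue for the whole frame.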

So far, I’m finding the results of functional video generation interesting, though I do think that more traditional computer animation is a lot more versatile.

Exponential Vuvuzelas Release (13 Oct 2017)

If you’re the type of person who always felt that your music collection just needed a few (hundred) more vuvuzelas, then you are in luck! Presenting a complete recording of Exponential Vuvuzelas, available for audio download and music video streaming today!

[Cover art for Exponential Vuvuzelas: many vuvuzelas in an exponential pattern, with the title "Exponential Vuvuzelas" by David Mandelberg. Stickers on the image read: "Now featuring more vuvuzelas than you ever wanted to hear in your entire life!" / "The perfect gift for a friend you don’t like!" / "For best results, pair this quality recording with an even higher quality pair of noise‐reduction ear plugs."]
  1. Exponential Vuvuzelas: Act 1, Crescendo: N. 1 Vuvuzela
  2. Exponential Vuvuzelas: Act 1, Crescendo: I. 2 Vuvuzelas
  3. Exponential Vuvuzelas: Act 1, Crescendo: II. 4 Vuvuzelas
  4. Exponential Vuvuzelas: Act 1, Crescendo: III. 8 Vuvuzelas
  5. Exponential Vuvuzelas: Act 1, Crescendo: IV. 16 Vuvuzelas
  6. Exponential Vuvuzelas: Act 1, Crescendo: V. 32 Vuvuzelas
  7. Exponential Vuvuzelas: Act 1, Crescendo: VI. 64 Vuvuzelas
  8. Exponential Vuvuzelas: Act 1, Crescendo: VII. 128 Vuvuzelas
  9. Exponential Vuvuzelas: Act 1, Crescendo: VIII. 256 Vuvuzelas
  10. Exponential Vuvuzelas: Act 1, Crescendo: IX. 512 Vuvuzelas
  11. Exponential Vuvuzelas: Act 1, Crescendo: X. 1024 Vuvuzelas
  12. Exponential Vuvuzelas: Act 2, Diminuendo: I. 1024–0 Vuvuzelas: “Outro”
  13. Exponential Vuvuzelas: Act 2, Diminuendo: N. 0 Vuvuzelas: “A much needed break for your ears”
  14. Bonus! All 37 Samples From Exponential Vuvuzelas, for Your Listening Agony

In addition to the music and videos, there’s also a score of the composition, the code used to turn 37 vuvuzela samples into 1024 simultaneous vuvuzelas, and the code used to generate the visual part of the music videos.
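
That mixing code isn't reproduced here; as a very rough sketch of the general approach (all names and details below are my own assumptions), layering N vuvuzelas from a small pool of recordings could look something like this:

#include <cstddef>
#include <random>
#include <vector>

// Mix `count` vuvuzelas from a small pool of mono recordings by summing
// randomly chosen recordings at randomly staggered start times, scaled so
// the sum stays within range. A sketch of the idea only.
std::vector<double> MixVuvuzelas(
    const std::vector<std::vector<double>>& pool,  // e.g. 37 recorded samples
    std::size_t count, std::size_t output_length, std::mt19937& rng) {
  std::vector<double> out(output_length, 0.0);
  std::uniform_int_distribution<std::size_t> pick(0, pool.size() - 1);
  std::uniform_int_distribution<std::size_t> start(0, output_length - 1);
  for (std::size_t i = 0; i < count; ++i) {
    const std::vector<double>& recording = pool[pick(rng)];
    const std::size_t offset = start(rng);
    for (std::size_t j = 0;
         j < recording.size() && offset + j < output_length; ++j) {
      out[offset + j] += recording[j] / count;  // crude normalization
    }
  }
  return out;
}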

Exponential Vuvuzelas (Coming Soon) (10 Oct 2017)

You know what the world really needs more of? Vuvuzela music. Yup. That’s totally a pressing issue in the world today. Well, I’m here to help with a new “musical” composition, Exponential Vuvuzelas. A high quality recording of this work will be released soon on my first ever full‐length album.

Exponential Vuvuzelas Score

Act 1, Crescendo

In the first act, the vuvuzela noise music starts gently and keeps increasing as more and more vuvuzelas join in.

Movement N. 1 Vuvuzela

A lone vuvuzela plays repeatedly for some amount of time.

Movement I. 2 Vuvuzelas

The vuvuzela from the previous movement continues, and one more vuvuzela joins in. This continues for some amount of time.

Movement II. 4 Vuvuzelas

The vuvuzelas from the previous movement continue, and two more vuvuzelas join in. This continues for some amount of time.

Movement III. 8 Vuvuzelas

The vuvuzelas from the previous movement continue, and four more vuvuzelas join in. This continues for some amount of time.

Movement IV. 16 Vuvuzelas

The vuvuzelas from the previous movement continue, and eight more vuvuzelas join in. This continues for some amount of time.

Movement V. 32 Vuvuzelas

The vuvuzelas from the previous movement continue, and 16 more vuvuzelas join in. This continues for some amount of time.

Movement VI. 64 Vuvuzelas

The vuvuzelas from the previous movement continue, and 32 more vuvuzelas join in. This continues for some amount of time.

Movement VII. 128 Vuvuzelas

The vuvuzelas from the previous movement continue, and 64 more vuvuzelas join in. This continues for some amount of time.

Movement VIII. 256 Vuvuzelas

The vuvuzelas from the previous movement continue, and 128 more vuvuzelas join in. This continues for some amount of time.

Movement IX. 512 Vuvuzelas

The vuvuzelas from the previous movement continue, and 256 more vuvuzelas join in. This continues for some amount of time.

Movement X. 1024 Vuvuzelas

The vuvuzelas from the previous movement continue, and 512 more vuvuzelas join in. This continues for some amount of time.

Act 2, Diminuendo

In the second and, thankfully, final act, the vuvuzelas finally go away.

Movement I. 1024–0 Vuvuzelas: “Outro”

The vuvuzelas from the last movement in the previous act continue to play until they run out of breath, without starting up again. The movement ends when the last vuvuzela is done.

Movement N. 0 Vuvuzelas: “A much needed break for your ears”

All vuvuzelas remain silent. This lasts for as long as is needed for listeners to realize that the work is done.

Managing My Music Collection (4 Oct 2017)

As my music collection has grown, I’ve cobbled together a handful of procedures for managing it from my Ubuntu desktop. This post is primarily for my own benefit so I don’t forget parts of it, but I’m publishing it in case it’s useful to anybody else. For background, the collection is currently at 12,320 tracks, and growing. The vast majority is from (in decreasing order) CDs, vinyl records, and digital downloads. My general strategy is to save as much of any originals as possible in a lossless format (currently, FLAC), and generate smaller, lossy copies of the music as needed. I rely heavily on MusicBrainz for all metadata.

Directory layout

  • archive: Loosely organized files that are not for listening directly, e.g., un‐split digitized vinyl records
  • master.rw: Well organized, master copy of the collection
  • master: Read-only view of master.rw
  • profiles: Various copies of the collection, derived from master

Getting data off of the original media

Ripping CDs

  1. Figure out what sort of disc it is, using cdrdao disk-info. Sometimes there are unlisted data tracks that this discovers.
  2. Use Sound Juicer to get the Disc ID to submit to MusicBrainz.
  3. Use Sound Juicer to extract FLAC files from the audio tracks, into archive/cd/artist-name/album-name. I manually changed its dconf setting for paranoia to ['fragment', 'overlap', 'scratch', 'repair'].
  4. In archive/cd/artist-name/album-name, run cdrdao read-toc d01s01.toc (replacing d01 with the appropriate disc number) to extract the table of contents for the audio session.
  5. If there are any other sessions, extract them by running cdrdao read-cd --session 2 --datafile d01s02.iso d01s02.toc, replacing the disc and session numbers as appropriate, and changing the data file’s extension if appropriate.
  6. If any of the extra sessions contain music or music videos, extract those to individual files.

Digitizing and splitting vinyl records

(This procedure can probably easily be adapted for tapes or other analog sources, but my experience so far is primarily with vinyl records.)

  1. Create a new directory archive/vinyl/album-name, and change into it.
  2. If there’s more than one disc, make a text file in the directory with a note about what order the sides will be digitized in. E.g., for an auto‐sequence album, note that the sides will be digitized in order of side number, not one disc at a time.
  3. If there’s anything else that would affect digitization, note it in a text file. E.g., note if the record is monophonic, or if it will need speed and pitch adjustments.
  4. Plug in the USB turntable, and run record-vinyl project.flac to start recording audio. (Before writing record-vinyl, I had tried Audacity and Ardour for this step. Audacity froze and crashed too often, and Ardour had occasional buffer under‐runs when I did anything else with the computer at the same time. It’s definitely possible that I could have gotten either of them to work better with more effort, but the script wasn’t hard to write.)
  5. For each side, place the side on the turntable, clean it, and play it. If there are any skips, make a text file in the directory with a list of every track that contains a skip.
  6. Stop record-vinyl.
  7. If there were any skips, use Audacity to clean them up, and save the result as a new file. If the pitch and speed need adjustment, do that and save the result as a new file. Do not down‐mix to mono yet, because it’s occasionally easier to split tracks with the fake stereo signal, due to more noise in one channel than the other. (I save the result as a new file instead of going straight to track splitting, to avoid relying on being able to read Audacity project files in the future if I ever want to make any changes.)

    [Screenshot: skipping audio selected in Audacity, before being removed]
  8. Open the un‐split audio file in Audacity, to split it into individual tracks:
    1. Switch to spectrogram view. Drag the bottom of the track down to make it as vertically large as possible, while still leaving space for a small label track at the bottom. (I’ve found this makes it much easier to see the boundaries between tracks.)
    2. For each visible track boundary (which should show up on the spectrogram as background noise with no signal), select from the end of the boundary to the start of that track (which is either the end of the previous label, or the beginning of the disc side). Listen to about a second at a time at each end of the track to make sure the boundaries are at the right place, then create a label in the label track. Within each disc side, there should be no gaps between labels, and no overlapping labels.

      [Screenshot: Audacity spectrogram view of the boundary between two tracks, after creating a label for a track]
    3. Compare labels against the printed track list, and adjust as needed. If there are multiple tracks listed in a place where there’s only one label, split that label into multiple new labels, using the printed track times, the audio, and the spectrogram as a guide. Merge any labels that are all within the same listed track into a single new label. If the track list doesn’t include times, look at the placement of gaps on the disc itself as a guide for the correct track lengths.
    4. Export the label track, since it’s a simple text format with all the relevant info for splitting.
    5. If needed, down‐mix the audio to mono.
    6. Export the audio from each label to individual files.

Downloading digital media

  1. Download the files to a subdirectory of archive.
  2. Leave the originals in archive, and make a copy for tagging and moving to master.rw.

Tagging music files and adding them to the collection

  1. Get a front cover image, potentially by scanning the cover art. For large cover art, e.g., of 12″ records, use Hugin to stitch together multiple scans.
  2. Make sure there’s a correct MusicBrainz release, either by adding a new one, or by using an existing one and fixing or completing it if needed. For a CD, attach the extracted Disc ID if needed. I’ve found m17n’s rfc1345 input method very helpful for typing all the punctuation (e.g., curly quotes, various dashes) and scripts (e.g., Cyrillic, Hebrew, Arabic, Greek) in my music collection, without needing to learn a bunch of different keyboard layouts.
  3. Add the more basic of my custom folksonomy tags to MusicBrainz: tag the release with added/YYYY/MM/DD to mark when I added it to my collection, and tag tracks with context/hidden-track/pregap, context/hidden-track/separated-by-silence, or context/hidden-track/unlisted as appropriate.
  4. Tag the music files with MusicBrainz Picard. When tagging files with no preexisting tags (e.g., from vinyl), be especially careful when matching files against tracks to tag.
  5. Use Ex Falso to add ReplayGain tags, and then move the files from archive to master.rw. The rename pattern I use for moving the files is /home/dseomn/Music/master.rw/<albumartistsort>/<album>/d<discnumber|<discnumber>|XX>t<tracknumber|<tracknumber>|XX>. <artist> - <title>.
  6. If any of the newly‐moved files have filenames longer than 251 bytes, shorten them to 251 bytes. (251 allows other copies of the collection to add .mp3 or .ogg at the end of the filename.)
  7. Move any non‐audio files (e.g., cover art, CD tables of contents, etc.) into the same directory as the music files.
  8. Run CoHydra with my configuration to generate copies in profiles from master. (This does things like ensuring consistent cover image filenames for media players that need that, filtering out files that media players don’t understand, creating a directory with only music videos, and recoding to lossy formats for devices with limited storage.)

After adding music

As soon as possible after adding new music, listen to it once through. For vinyl, pay attention to make sure that the audio corresponds to the track title, and the track boundaries make sense. For CDs, listen for errors that might be correctable by washing and re‐ripping the CD. After getting more acquainted with the music over time, come back to it to add more of my folksonomy tags, then add those tags to the files with Picard.

Every once in a while, run lint-analog-audio-rips to find vinyls that I started digitizing and forgot to finish. Also, scan the entire collection with Picard to pick up relevant changes in MusicBrainz data.

Secure Backups for the Long Term (18 Sep 2017)

I’ve been thinking a lot about backups recently. One of my several drives started showing signs of failure, and I had already been meaning to upgrade my backups from an rsync‐based system with no support for multiple snapshots to something better. Of the many backup systems I’ve looked at so far, some have come relatively close to what I think backup software should do, but none are close enough. Does anybody want to help me start a company to make better open source backup software? I have some ideas for monetization, and here’s a first stab at some principles for the software:

Principles

Primary principles

Multiple untrusted tenants, single trusted service

One backup system must be able to provide backup services to multiple clients. The service should be able to support diverse types of clients, including different operating systems and different backup schedules. A client must not be required to trust any other client with anything, but it must trust the backup service with its data.

Additionally, it must be possible to configure the service so that a client does not need to trust itself in the future. E.g., it must be possible to have clients that can add new data without also being able to delete or modify any of their own data. This prevents an attacker who compromises a client from being able to compromise backups made from that client prior to the compromise.

Durability

Backed up data should outlive any people involved in creating it. There must not be any single points of failure in accessing (but not necessarily in adding) backed up data. It must be possible to maintain offline copies of backups in relatively inaccessible and secure locations. It must be possible to maintain online copies in places that can’t run custom software, e.g., cloud storage. It must be possible to access data long after this software stops being maintained.

Any data format changes must include upgrade paths. Wherever cryptographic, compression, chunking, or other algorithms are used in ways that affect stored data, there must be a clear story for how future versions of the software can transition to newer algorithms as needed. Wherever cryptographic keying material is used, it must be possible to gradually and securely roll over to new keying material.

All software needed for reading backed up data, including all dependencies, must have source code available. Whenever possible, source code and documentation should be stored along with backed up data, to maximize the chance that data will be recoverable after this software stops being maintained. Additionally, data formats and algorithms should be chosen with the intent of making data access without a working copy of this software as easy as possible.

Integrity

Data must be protected against interrupted processes, power failures, bit rot, or any other form of accidental corruption. Any attempt to access data must either succeed with the correct data, or fail loudly.
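
As one deliberately simplified illustration of the "fail loudly" half of this principle (the checksum choice and all names here are mine, purely for the sketch; real backup software would use a cryptographic hash and cover far more than individual chunks):

#include <cstdint>
#include <stdexcept>
#include <vector>

// A stored chunk: the payload plus the checksum recorded when it was written.
struct Chunk {
  std::vector<unsigned char> data;
  std::uint64_t stored_checksum;
};

// Toy checksum (64-bit FNV-1a) for illustration only.
std::uint64_t Checksum(const std::vector<unsigned char>& data) {
  std::uint64_t h = 14695981039346656037ULL;
  for (unsigned char byte : data) {
    h ^= byte;
    h *= 1099511628211ULL;
  }
  return h;
}

// Reads either return verified data or throw; silently handing corrupted
// data back to the caller is never an option.
const std::vector<unsigned char>& ReadVerified(const Chunk& chunk) {
  if (Checksum(chunk.data) != chunk.stored_checksum) {
    throw std::runtime_error("backup chunk failed integrity check");
  }
  return chunk.data;
}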

Authenticity

It must be possible to verify the authenticity of stored data. It must be possible to detect inauthentic modifications to authentic data, and addition of inauthentic data. It should be possible to detect replay of previously authentic data and deletion of authentic data.

A client must authenticate to a server before the client is authorized to perform any backup operations. A server must authenticate to a client before the client trusts the server with its data. Data in transit between the client and server must be verifiable.

Confidentiality

It must be possible to encrypt stored data. [TODO: Get a better understanding of how to maintain confidentiality while also supporting incremental transfer of changed data, and update this section. Consider using random padding. Don’t forget about untrusted clients who can inject arbitrary cleartext.]

If availability and durability of data are valued over confidentiality, it must be possible to disable encryption of stored data. Additionally, it should be possible to selectively enable/disable encryption for some storage media but not others.

Data in transit must be encrypted using authenticated, ephemeral key agreement.

Monitoring

It must be possible to monitor the system and receive alerts about any potential problems.

Data integrity should be monitored by reading all stored data on a periodic basis, and alerting on any issues. Read or write errors during normal operation should also trigger alerts. Documentation should strongly recommend SMART monitoring, or any other sensible monitoring that might catch potential integrity or availability issues.

Per-client monitoring should be possible. E.g., the service could alert if a client has not made a backup recently enough. Ideally, it would also be possible to specify per-client backup tests that would trigger an alert on failure. E.g., a test could check that a backup contains a MySQL dump file with a timestamp close to the timestamp of the overall backup.

Per-data-copy monitoring should be possible. E.g., the service could alert if an offline medium has not been updated or verified recently enough.

Healthy ecosystem

The ecosystem around the backup software should be healthy. The more people who use and depend on it, the more likely backups will continue to be readable for longer without resorting to code archeology. The more import and export tools, the better. The more supported types of clients, the better. The more supported deployment environments, the better.

Secondary principles

These principles should be followed, but they must not interfere with any primary principle. E.g., efficient network transfers must not enable one client to use the backup system as an oracle to violate the confidentiality of another client.

Scalability

Ideally, the service should be able to run in a distributed mode. It should be possible to add new data to any server, and have that data propagate to all other servers. It should be possible to manage offline storage and cloud storage copies from any server.

Potentially, the service should be able to run in a multi-layer distributed mode, with distinct clusters where no single failure within a cluster affects operation of that cluster, and no single failure of an entire cluster affects operation of the entire system. E.g., a cluster could have multiple storage servers and multiple access servers. Each cluster could have an eventually-consistent, complete copy of all data, striped-with-parity across multiple storage servers. Any access server could operate on data within its cluster, replicate data to/from an access server in another cluster, and accept new backups from clients.

Efficiency

Resources should be used efficiently. Data should be deduplicated. Unchanged data shouldn’t be sent over the network unnecessarily. It should be possible to specify a retention policy for when old data is deleted. Resuming an interrupted transfer should be possible.

Ease of use

It should be easy to add or remove a client, to swap drives, etc. Ideally, initial setup would also be easy. There should be a mode where the client and server are bundled together.

Diverse storage media

Hard drives, solid state drives, and cloud storage should be the primary targets. Tapes, optical media, and any other reasonable storage media should also be supported, at least to a limited extent. For example, it might be reasonable to require low seek latency for creating a new backup, but support bulk copying to tapes or optical media after the backup is created. To the extent possible, all media should support integrity verification in a single pass, and extraction of a complete set or subset of the data in a single pass.

This Should Never Happen (4 Sep 2017)

Have you ever wanted to write code that you could never get away with in a respectable environment? In that case, I have just the repository for you: https://github.com/this-should-never-happen/this-should-never-happen. Pull requests of any quality are welcome! I started it out with a few really bad Hello World examples, some funsafe optimizations, and a regular expression‐based HTML parser.

A Month of Learning New‐to‐Me Things (2 Sep 2017)

As mentioned previously, I spent the past month learning a bunch of new‐to‐me things, with the goal of potentially opening up freelancing opportunities I hadn’t considered yet.

I started by getting my first ever smartphone. Yes, yes, I know. What sort of techie makes it all the way to 2017 without having ever had a smartphone? It just never seemed important to have one before now. And to be fair, I had done some Android programming on work‐provided phones a couple of years ago, so I wasn’t completely unfamiliar with them.

This led to learning more about phone bootloaders, low‐level Android components, and firmware, as I installed LineageOS and struggled not to brick my phone. Luckily it all worked out fine after a couple of missteps and about 6 operating system reinstalls, some of which were caused by my initial misunderstanding of how Android handles disk encryption.

Once I had a functional phone with Lineage, I moved on to setting up an instance of Nextcloud so I could sync my contacts, calendars, files, etc. using my own server. It definitely feels rough around the edges in a few places, but overall I’m really impressed with the Nextcloud core and its many available apps. It’s pretty amazing to be able to add something to my grocery list on my computer, then use my phone to access it at the store, without manually copying anything.

Once I got a data plan, I read up on mobile data security, found it disappointing, and set up redundant VPNs. It was more difficult than I expected to configure the VPN servers to support only 256‐bit crypto with ephemeral key agreement, but I eventually got it all working with the cipher suites I wanted. I wonder if there’s any market for pre‐built home VPN devices? That might be a fun business to start.

As I mentioned in my initial post about freelancing, I’m thinking of improving open source projects’ support for multi‐factor authentication. But before jumping on the FIDO U2F bandwagon, I wanted to understand it better, so I spent some time reading. The specification looks really well thought out, so I now feel confident that U2F is the right direction to push. Native U2F support in SSH would be really nice, so I might start there.

At one point, I started working on business cards, but got stuck deciding what link to put on them. My old homepage was not very good, and I also didn’t want a really long direct link to my résumé. So I took a roughly two week break to learn some modern web design and set up this new website. Now I have a bit of a crush on flexbox and some of the new CSS length units that didn’t exist the last time I looked at the specifications.

Freelancing (2 Sep 2017)

About a month ago, I quit my job to start freelancing. Or alternatively, to run an experiment, go on an adventure, take a sabbatical, or start a business. I’m still figuring out how much of each it’s going to be. My main goal is to try a bunch of different things that appeal to me, while being open to any opportunities that appear in the process.

Some of the things I want to try are closely aligned with my work experience. For example, I once found a security issue in a protocol designed to secure all inter-domain routing on the Internet, and I’m a member of the IETF’s Security Directorate; cyber security consulting is a clear next step. I like analyzing complex systems for potential security issues, and working to mitigate or resolve those issues. I’m also considering various bug bounties and Google Patch Rewards. It would be really nice to use the Patch Rewards program, or possibly other funding sources, to increase support for multi‐factor authentication in open source projects.

Other things I want to try are slightly more outrageous. For example, apparently some people make money by recording themselves playing video games; I want to try something a little different, speedrun videos of some of my favorite logic puzzles. Or maybe I’ll make a YouTube channel about cryptography and other assorted topics.

I also want to spend more time learning new‐to‐me things, in order to see if any of these things lead to new opportunities I haven’t even considered yet. This is most of what I’ve been doing for the past month, and it will get its own follow‐up post shortly. Edit 2017-09-02: added link to follow‐up post.

I’m looking forward to seeing where this freelancing/experiment/adventure/sabbatical/entrepreneurship plan goes.
