Sam Nabi https://samnabi.com/blog Kirby Sat, 23 Nov 2024 20:18:48 +0000 <![CDATA[How I made a public archive of my Twitter posts]]> https://samnabi.com/blog/how-i-made-a-public-archive-of-my-twitter-posts https://samnabi.com/blog/how-i-made-a-public-archive-of-my-twitter-posts Tue, 19 Nov 2024 00:00:00 +0000 It's past time to stop giving any oxygen to the ongoing dumpster fire that is Twitter/X. For those of us who joined in the early days, it's sad to walk away from the platform that had been a touchpoint for political organizing and local community-building. But it's not that place anymore, and the sooner we build a home elsewhere, the better (cough, spore.social).

For old times' sake, though, let's start with a trip down memory lane. I joined Twitter in 2009 at the urging of my roommate Isabel. I first approached it as a way to promote my music, and I wrote posts starting with "is", in the format of OG Facebook status updates: "@samnabi is pumped to play at the circus room on thursday!"

In 2010, I worked as a co-op student in the Ontario government. Canadian politicians like Tony Clement, and journalists like Kady O'Malley and Paul Wells were using Twitter to break news quicker and have direct conversations with regular people. My boss asked me, as the resident millennial, to teach the rest of the team what Twitter was all about. There were already some other government departments using it reliably!

[Embedded tweet from @MBower01: "@mnrcentral trying to figure out if there is still a fire-ban in northern ontario. Can you help?"]

I remember when Twitter had to change their trending topics algorithm away from simply showing the most talked-about topic, because it was just non-stop Justin Bieber. I remember when retweets graduated from a copy-and-paste exercise to a button you could click. I remember when they decided to allow posting photos, instead of just text, on the platform. I remember when they started notifying us when someone favourited a post (so many notifications seemed excessive at the time). Good times.

Today, the times are not so good over at Twitter/X. So I've decided to delete all my posts there, but not before downloading an archive of my posts and hosting them on my own website. Here's how I did it.

Step 1: download your archive

Go to x.com/settings/your_twitter_data to download an archive of all your data. It might take 24 hours or so for the archive to be prepared, and they'll send you an email reminder when it's ready.

This archive is essentially a web app that you can open in your browser, although the data and photos are all stored locally. We'll be modifying the archive to make it safe for public consumption, meaning we will have to strip out a lot of personally-identifying information. If you want to maintain the full version of your archive including all of Twitter/X's tracking and analytics data about you, save another copy of the archive somewhere.

You can view your archive by opening Your archive.html in a browser. It's a read-only version of the Twitter timeline where you can navigate your tweets, retweets, likes, and more.

Step 2: remove ad-related data

In the archive, there's a folder called data/. There's a lot of personally-identifying information in here that I don't want to publish, so let's delete all these files:

  • ad-engagements.js
  • ad-impressions.js
  • ad-mobile-conversions-attributed.js
  • ad-mobile-conversions-unattributed.js
  • ad-online-conversions-attributed.js
  • ad-online-conversions-unattributed.js
  • ads-revenue-sharing.js
  • personalization.js

Step 3: remove data related to blocked and muted users

It's probably not a good idea to publish a list of people who you've blocked or muted. Let's remove these files from the data/ folder as well:

  • block.js
  • mute.js
  • smartblock.js
  • periscope-ban-information.js

Step 4: remove private information

The archive holds all kinds of data that is meant to be private, including identifiers like your email address (and any previous email addresses you have used to access the account), your IP address, connected apps, direct messages, saved searches, and Grok chats. Let's delete these files from the data/ folder (if you'd rather script the deletions, see the sketch after this list):

  • account-timezone.js
  • account-suspension.js
  • account-label.js
  • account-creation-ip.js
  • ageinfo.js
  • audio-video-calls-in-dm-recipient-sessions.js
  • audio-video-calls-in-dm.js
  • branch-links.js
  • community-note-rating.js
  • community_tweet_media (folder)
  • community-tweet.js
  • connected-application.js
  • contact.js
  • deleted-note-tweet.js
  • deleted-tweet-headers.js
  • deleted_tweets_media (folder)
  • deleted-tweets.js
  • device-token.js
  • direct_messages_group_media (folder)
  • direct_messages_media (folder)
  • direct-message-group-headers.js
  • direct-message-headers.js
  • direct-message-mute.js
  • direct-messages-group.js
  • direct-messages.js
  • email-address-change.js
  • grok-chat-item.js
  • ip-audit.js
  • key-registry.js
  • ni-devices.js
  • phone-number.js
  • protected-history.js
  • reply-prompt.js
  • saved-search.js
  • screen-name-change.js
  • shopify-account.js
  • sso.js
  • tweetdeck.js
  • user-link-clicks.js
  • verified-organization.js
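
If you'd rather not delete dozens of files by hand, a short PHP script can do it in one pass. Treat this as a minimal sketch: the $sensitive list below is abbreviated, so fill it in with every file and folder named in Steps 2 through 4, and run it from the root of the extracted archive.

<?php
// Batch-delete sensitive files and folders from the data/ folder.
// The list here is abbreviated; extend it with everything named in Steps 2-4.
$sensitive = [
  'data/ad-engagements.js',
  'data/ad-impressions.js',
  'data/block.js',
  'data/mute.js',
  'data/direct-messages.js',
  'data/direct_messages_media', // folders are handled below
];

foreach ($sensitive as $path) {
  if (is_dir($path)) {
    // Empty the folder first (assumes media folders are flat, with no subfolders)
    array_map('unlink', glob($path . '/*'));
    rmdir($path);
  } elseif (file_exists($path)) {
    unlink($path);
  }
}
echo "Done.\n";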

Step 5: clean up the manifest file

The data/manifest.js file contains references to a lot of the data that we just deleted. Let's clean up this file by removing references to any of the sensitive data that we dealt with above. Under dataTypes, these are the only items I have kept:

  • account
  • app
  • article
  • articleMetadata
  • catalogItem
  • commerceCatalog
  • communityNote
  • communityNoteRating
  • communityNoteTombstone
  • follower
  • following
  • like
  • listsCreated
  • listsMember
  • listsSubscribed
  • moment
  • momentsMedia
  • momentsTweetsMedia
  • noteTweet
  • periscopeAccountInformation
  • periscopeBroadcastMetadata
  • periscopeCommentsMadeByUser
  • periscopeExpiredBroadcasts
  • periscopeFollowers
  • periscopeProfileDescription
  • productDrop
  • productSet
  • professionalData
  • profile
  • profileMedia
  • shopModule
  • spacesMetadata
  • tweetHeaders
  • tweets
  • tweetsMedia
  • twitterShop
  • verified

After modifying the dataTypes items, try to visit the archive in your browser to ensure the code is still formatted properly and nothing appears broken.
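
Hand-editing a big JavaScript object is an easy way to leave a stray comma behind. As an extra check, here's a rough PHP sketch that strips the assignment prefix and tries to parse the rest of the file as JSON. One assumption to flag: window.__THAR_CONFIG is the global that my manifest.js assigned to, so peek at the first line of yours and adjust the pattern if it differs.

<?php
// Quick syntax check for a hand-edited data/manifest.js.
// Assumes the file has the form "window.__THAR_CONFIG = { ... }".
$raw = trim(file_get_contents('data/manifest.js'));
$json = rtrim(preg_replace('/^window\.__THAR_CONFIG\s*=\s*/', '', $raw), ';');
json_decode($json);
if (json_last_error() === JSON_ERROR_NONE) {
  echo "manifest.js parses cleanly\n";
} else {
  echo "Parse error: " . json_last_error_msg() . " (check for stray commas)\n";
}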

Step 6: censor personal information in data/account.js

The data/account.js file is necessary for the archive to function properly, but it also contains some information that I don't want to publish. This includes my email address and the method used to create the account. I simply replaced these values with "No data".

You should keep your accountId intact, as it is required for some parts of the archive to display properly.

The full data/account.js file now looks like this:

window.YTD.account.part0 = [
  {
    "account" : {
      "email" : "No data",
      "createdVia" : "No data",
      "username" : "samnabi",
      "accountId" : "19156606",
      "createdAt" : "2009-01-18T20:28:08.000Z",
      "accountDisplayName" : "Sam Nabi – @samnabi@spore.social"
    }
  }
]

Step 7: tweak the archive homepage

This step is optional, I suppose, but if you're publishing your archive for other people to read, it's helpful to give them a bit of an introduction that is different from the message that Twitter/X gives to people who have just downloaded their own archive privately.

  1. I removed the file home-image.png, because I found it unnecessary
  2. In Your archive.html I changed the <title> tag from "Your Twitter Data" to "Sam Nabi's Twitter Archive"
  3. Using CSS, I hid some of the boilerplate text and the menu items for data that had already been deleted. I also added a basic introduction by injecting it into a pseudo-element using the content property. I did this through CSS instead of by modifying the content directly, because the archive is already compiled as a JavaScript app and I didn't want to poke around inside minified JS code to change things.

This is the CSS I added to the <style> block near the top of Your archive.html:

      /* Hide inactive menu items */
      a[role="menuitem"][href*="account"],
      a[role="menuitem"][href*="safety"],
      a[role="menuitem"][href*="personalization"],
      a[role="menuitem"][href*="ads"],
      a[role="menuitem"][href*="lists"],
      a[role="menuitem"][href*="messages"] {
        display: none;
      }

      /* Intro title  */
      .css-901oao.r-hkyrab.r-1qd0xha.r-1b6yd1w.r-1vr29t4.r-ad9z0x.r-1yflyrw.r-bcqeeo.r-qvutc0 .css-901oao.css-16my406.r-1qd0xha.r-ad9z0x.r-bcqeeo.r-qvutc0 {
        font-size: 0;
      }
      .css-901oao.r-hkyrab.r-1qd0xha.r-1b6yd1w.r-1vr29t4.r-ad9z0x.r-1yflyrw.r-bcqeeo.r-qvutc0 .css-901oao.css-16my406.r-1qd0xha.r-ad9z0x.r-bcqeeo.r-qvutc0:after {
        font-size: 1rem;
        content: '@samnabi on Twitter (2009-2024)';
      }

      /* Sidebar terms & conditions; intro text */
      .css-1dbjc4n.r-1jgb5lz.r-1ye8kvj.r-1qfoi16.r-tvv088.r-13qz1uu, 
      .css-1dbjc4n.r-1mf7evn.r-tvv088.r-1qewag5 .css-901oao.r-hkyrab.r-1qd0xha.r-n6v787.r-16dba41.r-ad9z0x.r-bcqeeo.r-qvutc0,
      .css-1dbjc4n.r-1j2wfwj .css-1dbjc4n.r-1mi0q7o.r-1j3t67a .css-901oao.r-hkyrab.r-1qd0xha.r-a023e6.r-16dba41.r-ad9z0x.r-bcqeeo.r-qvutc0,
      .css-1dbjc4n.r-1mi0q7o.r-1j3t67a .css-901oao.r-hkyrab.r-1qd0xha.r-a023e6.r-16dba41.r-ad9z0x.r-bcqeeo.r-qvutc0 .css-901oao.css-16my406.r-1qd0xha.r-ad9z0x.r-bcqeeo.r-qvutc0
       {
        display: none;
      }
      .css-1dbjc4n.r-1u4rsef.r-18u37iz.r-1x0uki6.r-15bsvpr.r-13qz1uu .css-1dbjc4n.r-1mf7evn.r-tvv088.r-1qewag5:after {
        content: 'This is a read-only archive of my Tweets. In 2022, I switched to Mastodon as my microblogging community of choice, then removed all my Twitter posts in 2024. These days, you can follow me at @samnabi@spore.social.';
      }

Step 8: redirect image links

For some reason, if you click on an image in the archive, it will open a new tab and take you back to twitter.com. Since we want this archive to exist independently of a decaying platform, I created this script to intercept image clicks and just show you the actual full image instead.

Place this entire <script> block just before the closing </body> tag in Your archive.html.

  <script>
    // On image click, show the local image instead of redirecting to Twitter
    document.addEventListener('click', (event) => {
      if (event.target.classList.contains('Tweet-photo') || event.target.classList.contains('Tweet-photoGalleryPhoto')) {

        // Prevent navigating to the twitter.com URL in image links
        event.preventDefault();

        // Instead, open the image file in a new tab
        // (guard against window.open returning null if a popup blocker intervenes)
        const newTab = window.open(event.target.src, '_blank');
        if (newTab) newTab.focus();
      }
    });
  </script>

Step 9: replace t.co links

All links shared on Twitter/X get shortened using their t.co service. This means that the links in your archive will still depend on Twitter/X's infrastructure to function! Instead, let's find and replace all t.co links with their expanded link.

We can find the expanded link by literally visiting a t.co link and seeing where it ends up. For a small number of links you may be able to do this manually using a find/replace tool in your text editor. For my 19.4k posts, though, I needed an automated solution. Here's a PHP script I used to replace all the t.co links.

Even though I used an automated script here, it took a very very long time to complete. This code is not optimized for speed!

<?php
/**
 * Determines the destination URL of a provided URL that gives a redirect.
 *
 * @param string $url the specified URL
 * @return string $url the destination of the specified URL.
 */
function getDestinationUrl($url) {
  echo $url;
  $ch = curl_init();
  curl_setopt($ch, CURLOPT_URL, $url);
  curl_setopt($ch, CURLOPT_HEADER, true);
  curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
  curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5); // 5-second timeout
  curl_setopt($ch, CURLOPT_TIMEOUT, 5); // 5-second timeout
  curl_exec($ch); // we only need the final URL, not the response body
  $url = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
  curl_close($ch);
  if (strstr($url, '//') !== false) {
    echo " => ".$url."\n";
    return $url;
  } else {
    echo " => null\n";
    return '';
  }
}

// Define the data files from the twitter archive
$data_files = [
  './data/like.js',
  './data/moment.js',
  './data/tweets.js',
  './data/profile.js'
];

foreach ($data_files as $file) {

  // Load contents of the file into a string
  $file_contents = file_get_contents($file);

  // Find all t.co links and replace them with their actual link
  // This is incredibly slow and not optimized, but I'm only doing this once so whatever :)
  if (preg_match_all('/https?:\/\/t\.co\/[\d\w]+/', $file_contents, $matches)) {
    foreach ($matches[0] as $link) {
      $file_contents = str_replace($link, getDestinationUrl($link), $file_contents);
    }

    // Overwrite the data file with the new replaced contents
    file_put_contents($file, $file_contents);
  }  
}
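
To run it, save the script in the root of the archive folder (any filename works; replace-tco.php is just a name I've picked for illustration) and invoke it from a terminal with php replace-tco.php. It prints each link and its destination as it goes, so you can watch its (slow) progress.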

All done!

Phew, that was a lot of fussing about! Now you should have an archive that you can upload to a public server, without the need to rely on any infrastructure managed by Twitter/X.

You can take a look at mine at samnabi.com/twitter.

After verifying that your archive is looking good, you can go ahead and delete your Tweets using a tool like TweetDelete.

]]>
<![CDATA[Send email from your Ubuntu LAMP server (the easy way)]]> https://samnabi.com/blog/send-email-from-your-web-server-the-easy-way https://samnabi.com/blog/send-email-from-your-web-server-the-easy-way Wed, 01 Feb 2017 00:00:00 +0000 The humble PHP mail() function is a handy friend to have. Whether sending yourself debugging messages from a test server, or implementing a quick-and-dirty contact form, I've always been able to rely on sending quick email messages from the server.

At least, this is the normal experience on a shared LAMP server. Email gets sent, more or less like magic, and I don't have to worry too much about the fine details behind the scenes. But now that I've upgraded to my own VPS, things can get a little hairy in email-land.

Don't host your own mail server

Email may be prehistoric technology that predates even the internet, but that doesn't mean it's simple to implement. The last thing you want is to manage a full-blown email server. Trust me, leave that to the pros.

Still, you want your PHP scripts to be able to send one-way messages. So you need to hook your server up to an existing SMTP mail service. I'll be using Postmark to handle that for me. They're good people — they'll make sure your emails don't get marked as spam and all that good stuff.

Uninstall sendmail and postfix packages

My VPS is running Ubuntu 16.04 and Apache 2.4. The most common email packages out there are sendmail and postfix. But we're not going to use either of them, because we don't need a complete email server that receives messages and has mailboxes and everything. All we want to do is send messages.

Let's stop those services from running and uninstall them, then:

$ service sendmail stop
$ service postfix stop
$ apt-get remove sendmail
$ apt-get remove postfix

Install and configure Simple SMTP (sSMTP)

Simple SMTP (sSMTP) is a package that does just what it says on the tin. Let's install it.

$ apt-get install ssmtp

Next, edit the sSMTP configuration file.

$ nano /etc/ssmtp/ssmtp.conf

I needed the following SMTP details to configure my server:

  • Hostname (e.g. smtp.postmarkapp.com)
  • Port (e.g. 25)
  • Username
  • Password

Here's what my ssmtp.conf file looks like (I've set sam@samnabi.com as my sender signature in Postmark):

root=sam@samnabi.com
mailhub=smtp.postmarkapp.com:25
AuthUser=MY-POSTMARK-SERVER-API-TOKEN
AuthPass=MY-POSTMARK-SERVER-API-TOKEN
hostname=samnabi.com

If you see any lines that say rewriteDomain or FromLineOverride, you can comment those out.

Next up, let's edit sSMTP's list of aliases:

$ nano /etc/ssmtp/revaliases

This file lists which Apache users are allowed to send mail through sSMTP. My PHP applications run under the www-data user, so I want to enable that user, plus root:

root:sam@samnabi.com:smtp.postmarkapp.com:25
www-data:sam@samnabi.com:smtp.postmarkapp.com:25

Good to go!

Go ahead and reboot the server; your PHP mail() function should be working now!
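
Before wiring it into a contact form, a quick smoke test can confirm the hand-off to sSMTP. This is a minimal sketch: the recipient address is a placeholder, so swap in one you control, and the From header should match your sender signature.

<?php
// Smoke test for the mail() setup.
// you@example.com is a placeholder; use a real address you control.
$sent = mail(
  'you@example.com',
  'Test from my VPS',
  'If this arrives, sSMTP is relaying messages correctly.',
  'From: sam@samnabi.com' // match the sender signature configured in Postmark
);
echo $sent ? "Message handed off for delivery\n" : "mail() returned false\n";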

If you need to debug anything, check your Apache logs at /var/log/mail.log and /var/log/mail.err.

I wrote this blog post because everything else I found on the internet ended up in a dark spiral of cryptic forum posts and listserv archives. If anything's unclear here, leave a comment! I'll do my best to clear up any confusing parts of the process.

]]>
<![CDATA[Jetpack: Only show related posts from the same category]]> https://samnabi.com/blog/jetpack-only-show-related-posts-from-the-same-category https://samnabi.com/blog/jetpack-only-show-related-posts-from-the-same-category Thu, 11 Aug 2016 00:00:00 +0000 The Jetpack plugin is a must-have for almost any WordPress website. It may be bloated with all kinds of crap you don't need, but it always has one or two features you absolutely do need. And it tends to do those things quite well (security protection, stats, social media sharing). It's no wonder that major theme developers are relying on Jetpack instead of reinventing the wheel.

So I thought it was really weird when I couldn't find any information about what I thought would be a common problem. Jetpack's Related Posts feature surfaces content suggestions at the bottom of a blog post, and I wanted to make sure those suggestions come from the same category as the current post.

Searches on Google? Nothing. Stack Overflow? Nada. WordPress forums? Zilch. Jetpack's own website recommended taking a look at the source code, which left me scratching my head.

At long last, I emailed the Jetpack support team and they provided me with some code to use. It was half-complete, but gave me enough new keywords to look up that I came across this solution by Brandon Kraft:

// Only return related posts that share the current post's categories.
function jp_only_rp_in_same_category( $categories, $post_id ) {
  $category_objects = get_the_category( $post_id );
  if ( ! empty( $categories ) ) {
    return array_merge( $categories, $category_objects );
  }
  return $category_objects;
}
add_filter( 'jetpack_relatedposts_filter_has_terms', 'jp_only_rp_in_same_category', 10, 2 );

Copy and paste that into your functions.php or wrap it in a simple plugin, and you're good to go.

]]>
<![CDATA[Bridging the web-native gap]]> https://samnabi.com/blog/bridging-the-web-native-gap https://samnabi.com/blog/bridging-the-web-native-gap Sun, 20 Sep 2015 00:00:00 +0000 The line between websites and apps is becoming blurrier every day. What with entire operating systems being made with HTML5, and the recent influx of native adblockers on mobile platforms, there's all kinds of cross-pollination that is absolutely good for the industry and good for the web in general.

Despite this, there's still quite a gap between native apps and the web. One glaring example that Twitter users have had to deal with for nearly three years: Instagram's decision to remove photo previews from tweets. This political decision has caused a truly horrible user experience for all of us.

Yeah, nobody's going to click that Instagram link. Not only is the preview missing, but this link will open up in your phone's browser instead of the Instagram app. Even though you may be logged in to the Instagram app on your phone, you'll likely have to log in again through the browser.

So let's recap. If you want to fav an Instagram photo you found on Twitter, just follow these simple steps:

  1. Click an obscure photo link that has no preview
  2. Get directed to your browser, which is probably a second-rate experience compared to the native app you already have
  3. Log in with the browser, even though you're already logged in on your app
  4. Get redirected to your home page feed. Now you've lost the photo you wanted to see.
  5. Switch back to the Twitter app
  6. Follow the same link you already clicked in step 1
  7. Double-tap to fav

Here's our problem: the browser doesn't know your apps exist. Your apps don't know that other apps exist. Your default browser gobbles up every link that comes across its path and the experience is terrible.

A better vision

This week, I've been musing about what a standards-based, backwards-compatible way to bridge the web-native gap might look like. The more I thought about it, the simpler the solution seemed. We have the necessary tools at our disposal, but nobody seems to be using them to solve this problem.

There are some truly mind-boggling proprietary URL schemes and workarounds out there, but relying on a new URL format or a third-party app is not a long-term solution. These approaches over-complicate the problem and drive a wedge further between the web and native.

What if all links to facebook.com were opened by your Facebook app by default? Any link to twitter.com would get intercepted by your Twitter client instead of opening in the browser. Same deal with Instagram.

This concept could be extended further. Let's say you're browsing Project Gutenberg to find a sweet ebook to read. You have an ereader app that listens for .epub links, and adds them to your library with one click. This saves you the steps of downloading the file, opening the app, and importing it manually.

Sounds nice. How do we get there?

The chokepoint here is the operating system itself. Right now, you can download alternative browsers and set them as your default. We need OSes to mash-up this functionality with a little regex and let any app be the "default browser" for certain kinds of links.

This preserves backwards-compatibility by opening links in the default browser if the user doesn't have the app, or if they're browsing on some other kind of device that doesn't have apps. It's the best of both worlds.

I'm not sure what the next step is: petition the smartphone OS makers? Build support in the developer community? Rally users on a per-app basis?

This solution really has benefits for all three groups of people. OS makers can save the time and energy they're wasting on duplication of the HTTP protocol -- no need to reinvent the wheel. Developers can offer deep integration with the web while offering first-rate native user experience. Users don't have to feel shuffled around like sheep and get a smoother user experience.

Let's talk about this. What do you think?

Update (1 Nov 2015): It looks like the newest versions of iOS and Android will support app-to-app links, also called deep linking. This is a good step forward, but unfortunately could also mean a step back. Both OSes will require apps to prove they own the URL, thereby crippling functionality for third-party clients and limiting the creative potential of the native mobile experience.

]]>
<![CDATA[Losing the internet]]> https://samnabi.com/blog/losing-the-internet https://samnabi.com/blog/losing-the-internet Fri, 29 Aug 2014 15:00:00 +0000 When I was fifteen years old, I came across a web design forum called Open Source Web Design (OSWD) and fell in love with the community there. Members would create XHTML/CSS templates that anyone could use for free. We gave feedback on each others' designs, chatted about trends in the industry, and the more experienced folk were more than happy to show the ropes to a young grasshopper like me.

There were never that many people in the OSWD community -- maybe 100 active users at the most. Logging into the forum felt like entering a cosy clubhouse, one that was small enough to get to know everyone. Not that it was an exclusive club; it's just that not many people knew about it.

OSWD was headed up by a man named Frank Skettino. Beyond that, I didn't know anything about him. But that was fine; he was the site's benevolent dictator and nobody really gave it much thought -- until he stopped approving new submissions. Shortly thereafter, the forums were removed. The site has been frozen in amber since.

Members of the community tried to find a new home, but it was never the same. This thread on GetFreeWebDesigns.com shows a few of the "old guard" trying to make sense of just what's been going on (I'm acousticsam). But by that time, the community had begun to dissipate.

They say the internet never forgets, but really it's so easy for years of content to be erased in the blink of an eye. At the end of the day, a website's users are at the mercy of whoever holds the keys.

The web loses chunks of its history every day. The masses of small- and medium-sized web services that are being snapped up by the likes of Google, Apple, Facebook, & Co. follow a well-worn pattern: build up a loyal following, sell out to a major corporation, then delete all your users' data. When Yahoo bought Geocities, then subsequently shut it down, millions of webpages died.

This past week, the phenomenon hit closer to home. WonderfulWaterloo.com, a forum for urban issues and development news in Waterloo Region, has been "temporarily" shut down, after users started voicing their discomfort with the way the site was being managed. I feel like this is going to be OSWD all over again.

Once a site is down, there's not a whole lot you can do to recover the content. I've set up a central repository with some last-ditch methods people can use to recover what they can from the ether. But it's like setting a library on fire and then trying to rescue the books. We'll never be able to get it all back.

Curiously, I received an email today from the owner of WonderfulWaterloo, who had evidently caught wind of my rescue attempt:

Please remove the images/files from your recovery page: https://github.com/samnabi/wonderful-waterloo-recovery. You don’t have their licences.

I spent hundreds of hours taking photos such as this, they aren’t for you to post publicly on any other websites: https://github.com/samnabi/wonderful-waterloo-recovery/blob/mas...2b%20R.jpg

I believe that people have a right to the content they've created. That includes the right to delete it from public record, and if I can ascertain the veracity of the owner's claims, I'll happily erase his photos from the repository.

The irony here is that when site owners pull the rug out from under a community, it's a huge betrayal of trust. OSWD could never get back on its feet after the first unannounced shutdown, and it will be difficult for the WonderfulWaterloo community. Years of photos documenting the region's growth, discussions, and debates about important community issues have been wiped clean.

Even if the site comes back online, will users trust WonderfulWaterloo with their data anymore?

]]>
<![CDATA[Generational aptitude]]> https://samnabi.com/blog/generational-aptitude https://samnabi.com/blog/generational-aptitude Tue, 24 Dec 2013 01:30:00 +0000 There are few things quite as irksome as watching someone type "www.google.com" when they can just search from the address bar. It's like counting out a hundred pennies to pay for a pack of gum. Why would you do that?! My internal monologue screams.

My grandparents and parents approach technology in a fundamentally different way than I do. When personal computing became mainstream, they had already reached adulthood. In their formative years, computers were something that very smart scientists and engineers built for large institutions. Knowing how to operate a computer was closer to rocket science than to auto repair, for example -- and to some extent, I think this mentality still sticks with them today.

Anytime my father asks me how to do some new tech-related task, I write out step-by-step instructions on a piece of paper. My grandparents are the same way. They need those instructions written down like a recipe because they want an authoritative source of information.

If for some reason my instructions don't anticipate every scenario -- a dialog box that I failed to account for, or a software update that changes the layout of a page -- they are reluctant to experiment. They're not likely to google the problem, nor are they keen to click around and see what happens. Fear of pushing the wrong button prevents them from taking a guess.

In a way, this generational gap in computer literacy is similar to learning a new language. Immerse a child in Russian, and they'll absorb the language as they learn and grow. Teach Russian to an adult as a second language, and their learning is skewed by the structure of their native tongue.

I'm not saying that everyone over 40 is a luddite. My dad actually adopts new technology faster than I do -- he's really excited about his new seven-inch phone-tablet hybrid while I hold onto my QWERTY Blackberry for dear life. My grandparents use software to edit photos and map out our family's genealogy, and they subscribe to a PC power user magazine to keep themselves up-to-date on the latest trends in tech.

As I think about the generational differences in the way we approach technology, I realise that in some cases, the aptitude gap goes the other way. My 78-year-old grandfather has no trouble driving with manual transmission, but I wouldn't know where to start. He also has a fascinating low-tech solution to encrypt the PIN codes for his various payment cards. It's basically a cipher that he keeps in his wallet on a piece of paper the size of a business card. I would never have thought to secure my data in this way, but it works -- and it's far from the prying eyes of the NSA.

Indeed, I have my own mental ruts and preconceived expectations when dealing with new technology. I still hunt for a save button when working in Google Docs, and I'm sure that other innovations will continue to trip me up down the road. I may have come of age at the same time as the Web, but it's evolving faster than I am. How long will it take for me to feel like I'm really out of my element?

]]>
<![CDATA[Serving up responsive background images on the fly]]> https://samnabi.com/blog/serving-up-responsive-background-images-on-the-fly https://samnabi.com/blog/serving-up-responsive-background-images-on-the-fly Mon, 23 Sep 2013 00:00:00 +0000 Over the past 6 months or so, I've been redesigning my website. The design itself is pretty much final, but it's an iterative process and there are always improvements to be made. One of my goals for the redesign was to cut down on bloat – both in terms of the CMS structure and users' load time.

On the CMS side, I ditched WordPress in favour of Kirby. Its core files weigh in at 136 KB zipped, compared to WordPress's 4.3 MB. Plus, Kirby uses a flat-file data structure that's dead simple to install, backup, and modify.

But what I really want to get at with this post is my technique for serving up responsive images. As you can see on the homepage, there's quite a large photo in the header, and I wanted to make sure that visitors get an appropriately-sized image for their device (nobody on a smartphone wants to download a 4000-pixel-wide behemoth).

I don't claim to be an expert on this subject, and responsive image techniques have been tackled by a lot of people who are much smarter than me over the past few years. But this is a solution that's easy to implement and gives appreciable results.

It's important to note that the images I'm working with here are background images – so why don't I just use media queries to serve up a responsive image? Well, because I have over a hundred of these images, and I plan to add more. They're mostly travel photos, and I don't want to manually create 3 or 4 different versions of each image. Your use case may require a different solution.

The code

Now that the caveats are out of the way, let's get to it. I used the excellent TimThumb to do most of the grunt work. This method redirects requests for images through to a PHP script that will fetch the image, and scale and crop it accordingly. The implementation is simple:

<!-- Load original image -->
<img src="/path/to/image.jpg" />

<!-- Load resized & cropped image -->
<img src="/path/to/timthumb.php?src=/path/to/image.jpg&w=500&h=100&zc=1&q=75" />

timthumb.php accepts five variables:

  • src: absolute path to the image
  • w: width in pixels
  • h: height in pixels
  • zc: zoomcrop – possible values are 0 (no crop) and 1 (crop). (default is 1)
  • q: image quality, out of 100 (default is 90)

My header area takes up the full screen width, so that should be the width of the image, too. As for the height, that depends on the size of the window because this is a responsive design. I'll have to wait until my logo loads to measure the height of the header with javascript.

Here's what it looks like. This script is placed right after the closing </header> tag:

<script>
    document.getElementById('logo').onload = function(){ // wait until #logo is finished loading
        var height = document.getElementById('header').offsetHeight; // measure the height of #header
        var width = window.outerWidth; // measure the width of the window
        document.getElementById("header").style.backgroundImage = "url(/path/to/timthumb.php?src=/path/to/image.jpg&h="+height+"&w="+width+")"; // set a background image to #header, using the height and width calculated above
    }
</script>

Of course, putting a plain CSS background-image style inside a <noscript> tag will ensure graceful degradation for those without javascript enabled. Just manually set the width and height to something middle-of-the-road.

The results

So what does this mean for performance? Let's take a look at the results of two image downloads: the first is a full-size, 261 KB image of the Bastei bridge in Germany. The second is the same image, but resized to 1280px * 206px (this is the appropriate size for a header image on my 13" laptop screen).

                             Original     Resized
File size                    261 KB       71 KB
Latency                      0.493 s      1.190 s
Total download time          10.78 s      6.96 s
Calculated download speed    24.2 KB/s    10.2 KB/s

I'm currently on a weak mobile internet connection, which is why these download speeds are so slow. But the main point here is that even though my connection speed was slower when I downloaded the resized image, it still completed the download faster. If the connection speeds had both been 24.2 KB/s, the resized image would have downloaded in 2.93 seconds (71 KB ÷ 24.2 KB/s ≈ 2.93 s).

As you can see, the latency is higher when we resize an image, because TimThumb needs to do some server-side processing before sending data back to the browser. But the resulting file downloads more quickly, more than offsetting the initial delay.

That's all there is to it! This solution is working for my header images at the moment, but the next step is to make all the images on this site responsive. Perhaps my best bet will be to extend this method; maybe I'll need a parallel system; or maybe I'll start from scratch with a comprehensive solution. Who knows? For now, I'm happy with yet another speed improvement.

]]>
<![CDATA[Goodbye, Reeder. Hello, MnmlRdr + Fluid.]]> https://samnabi.com/blog/goodbye-reeder-hello-mnmlrdr-fluid https://samnabi.com/blog/goodbye-reeder-hello-mnmlrdr-fluid Thu, 08 Aug 2013 00:00:00 +0000 Google Reader is no more, which has opened up the feed reader app scene to new ideas and business models. With the plethora of new feed readers out there, I won't get into a comparison of the different products on offer. After all, it has only been a month since Google Reader was shuttered. I'm sure the landscape will shift significantly over the next few months as smaller companies and independent developers vie for a slice of Google Reader's former userbase.

I like my feed reader to behave like an email client – running in the background, collecting articles so I can read when it's convenient for me. Up until recently, I had been using Reeder, but the app's author has neglected to update it to work without Google and frankly, I'm tired of waiting.

While searching for a Reeder replacement, I soon noticed two things: one, most of the new crop of feed readers are web-based; two, the Mac apps on offer typically don't support Snow Leopard.

I ended up settling in with MnmlRdr, a no-nonsense web-based feed reader being developed by Jordan Sherer. By eliminating superfluous features like social sharing, MnmlRdr has carved out a nice niche for itself in the flurry of Google Reader alternatives to come out in the last month. Its responsive design feels comfortable on my old Blackberry for on-the-go reading, which is a major perk. I've emailed the developer a couple times with bugs and feature suggestions, and he's always quick to respond.

Since MnmlRdr is in a private preview right now, here's a quick screencast to show what the user experience is like:

You'll notice that I have MnmlRdr running as a separate app on my computer. I used Fluid to make that happen, and it works beautifully. After a couple tweaks, MnmlRdr feels comfortably at home on my desktop.

Tweak #1: the icon

MnmlRdr's boxy logo looks great on the website, but it's a little imposing when nestled among my other dock items. So, I created an alternative logo.

To use this icon for the Fluid app icon, save the image and go to Preferences > General. Click the Change... button next to Application icon to update the icon.

Here you go:

Tweak #2: the badge

If you purchase the paid version of Fluid, you can add userscripts to your apps. This lets you do neat things like have a badge showing the number of unread articles. Here's how.

Go to Window > Userscripts, and click the plus sign in the bottom left corner. Change the pattern from *example.com* to *mnmlrdr*.

In the script field, copy and paste the following code. This will check the page's title bar every 5 seconds to see what the unread count is, then display it as a badge on the app icon.

window.fluid.dockBadge = '';
function updateDockBadge() {
    var title = document.title;
    var regex = /\((\d+)\)\s/;
    var res = title.match(regex);
    if (res && res.length > 1) {
        var newBadge = res[1];
        window.fluid.dockBadge = newBadge;
    }
    else {
        regex = /MnmlRdr/;
        if(regex.test(title)){
            window.fluid.dockBadge = '';
        }
    }
}
setInterval(updateDockBadge, 5000);

So, that's my setup. If you subscribe to RSS feeds, how have you adapted to a post-Google Reader world?

]]>
<![CDATA[You don't need another "read-it-later" app.]]> https://samnabi.com/blog/you-dont-need-another-read-it-later-app https://samnabi.com/blog/you-dont-need-another-read-it-later-app Sat, 18 May 2013 01:47:00 +0000 If you're anything like me, you know that keeping on top of news and finding interesting things to read online can become a chore. I find that I often build up a glut of bookmarks, get overwhelmed by it all, and promptly turn to Canvas Rider to relieve the pressure. Before I know it, I've wasted two hours on a mindless game and feel even worse about not getting around to the articles that I had wanted to read.

A while ago I started fiddling around with a concept for a unified "favourites feed" that would capture all of my favourited tweets, starred Google Reader items, and saved Reddit posts. (Unfortunately, with Google Reader going the way of the dodo, I'm not going to waste time trying to integrate it now.)

This is different from services like Pocket or Instapaper, because it's not another app that I have to worry about. It's just a convenient homepage that brings all my favourite posts under one roof. I can keep favouriting Tweets and saving Reddit posts as normal, and they'll show up like magic.

With so much information overload these days, it's easy to drown in the firehose. Pulling all my favourites together helps me keep an eye on what's important, and I hope it's useful for you too.

Check it out here: http://favs.samnabi.com

Right now, it only supports Twitter and Reddit. Let me know what other web services you're interested in!

]]>
<![CDATA[Tweet from the address bar in Chrome and Firefox]]> https://samnabi.com/blog/tweet-from-chromes-address-bar https://samnabi.com/blog/tweet-from-chromes-address-bar Mon, 25 Mar 2013 19:44:00 +0000 Update (8 March 2014): The original title of this post was "Tweet from Chrome's address bar". I've updated it to include Firefox, which supports the same functionality.

I love keyboard shortcuts. There's something satisfying about shaving off precious seconds from routine tasks, even if they are as trivial as sending a tweet.

Chrome

Chrome has had the ability to add custom search engines for a long time - but I recently realised that you're not limited to search engines, per se. Any URL that accepts variables will do. Take, for example, the URL to send a tweet. Here's how to set it up.

  • Copy and paste chrome://settings/searchEngines into your address bar, and hit enter.
  • Scroll to the bottom of the window and add a new search engine:
    • Put whatever you want for the search engine name, I wrote 'Tweet'.
    • The keyword is up to you too, but I put 't' because it's nice and short.
    • For the URL, enter https://twitter.com/intent/tweet?text=%s - the %s is important!
  • Click "Done". Your search engine should look something like this:

Now, you can type "t <space>" to start composing a tweet inside your address bar. Hit enter to continue, or Cmd-Enter (Ctrl-Enter for Windows) to continue in a new tab. You'll see this page if you're logged in:

Firefox

With Firefox, you can set up this same behaviour by adding https://twitter.com/intent/tweet?text=%s as a new bookmark. Just like in Chrome, set a keyword like 't' to precede your tweet.

Unlike Chrome, Firefox doesn't include the word "Search" before the preview label.

The cool thing is that Twitter will shorten any URLs you include before you hit "Tweet", so you can still make the most of that sweet sweet character limit.

]]>
<![CDATA[Line length and readability: speed vs. user experience]]> https://samnabi.com/blog/line-length-and-readability-speed-vs-user-experience https://samnabi.com/blog/line-length-and-readability-speed-vs-user-experience Thu, 27 Dec 2012 03:17:00 +0000 There is no shortage of opinions about the optimal line length for content on the web, especially in today's world of varying screen sizes and fluid layouts. However, many of these articles tend to focus on the speed and efficiency of reading rather than on users' perceptions. In my opinion, the user experience is much more important than actual reading speed. I don't care how long it takes someone to read an article; I just want them to enjoy their time on my site.

With that in mind, all the research I've found concludes that readers prefer reading content with fewer characters per line (cpl), no matter how they perform objectively in terms of speed.

Dyson and Kipping (1997) compared a single-column layout with a line length of 100 cpl to a 3-column layout with a line length of 30 cpl. They found that while a wide, single column results in faster reading speeds, people prefer reading in multiple narrower columns.

Dyson and Haselgrove (2001) found that a line length of 55 cpl (as opposed to 25 cpl or 100 cpl) "produced the highest level of comprehension and was also read faster than short lines".

Bernard, Fernandez, and Hull (2002) compared line lengths of 45, 76, and 132 cpl. They found that medium-width and narrow line lengths (45-75 cpl) make it easier to concentrate on the text, and that a line width of 76 cpl provides the most desirable layout.

Ling and van Schaik (2006) found no significant differences in reading speed or efficiency for different line lengths (options were 55, 70, 85, or 100 cpl), but participants preferred the 55 cpl line length.

Based on these findings, it seems that the old print industry standard of 45 to 75 cpl is still a useful measurement for content on the web.

This is a good thing to keep in mind when designing websites that cater to a plethora of browsers, screen sizes, and default font settings. The Kindle, for example, eschews the standard 16px default font size. Devices with higher pixel densities like the iPhone cause all kinds of other layout issues.

Going forward, we have to start using em-based layouts. Hardware will only become more fragmented, and em-based layouts ensure that content will look right no matter how it's accessed. Through all this, it's important to keep content at a readable width. The trick is, how will you define "readable" - based on speed, or based on the user experience?

]]>
<![CDATA[The University of Waterloo rolls out an unnecessary redesign]]> https://samnabi.com/blog/the-university-of-waterloo-rolls-out-an-unnecessary-redesign https://samnabi.com/blog/the-university-of-waterloo-rolls-out-an-unnecessary-redesign Fri, 17 Aug 2012 01:50:00 +0000 The University of Waterloo released a new website design today, foregoing its dark, photo-centric layout in favour of a brighter, more content-heavy homepage. It has received mixed reviews; some love the new design, some hate it. But it's best not to dwell too much on the design change, because when it comes to aesthetics you're never going to please everybody.

What we should dwell on, though, is the reason why UW needed to redesign its website at all. There were some legitimate concerns with the old site. It failed to meet certain accessibility requirements set by the province, and it wasn't mobile-friendly. The old site had interactive panels and dropdown menus that just didn't translate well to small screens. Surely, these issues could have been addressed by refining the existing website, rather than throwing the baby out with the bathwater.

Back in May 2010, UW released its Positioning Guide - a set of policies governing what colours, fonts, and emblems should be used in communications and publications put out by the university. The new website does not follow these policies. Rather than using the "Waterloo Yellow" specified in the colour palette (#FECB00 for those interested), the new website uses #FFDD00. It's a slightly lighter shade of yellow, which isn't the end of the world. But if UW isn't following its Positioning Guide on its own homepage, what purpose do the policies have, really? This represents a glaring lack of communication between the staff that set the policies for UW's brand identity and those who implement it.

In December 2009, White Whale Web Services (a firm from California that specialises in web development for higher education institutions) was contracted by the university to redesign the website. Extensive identity branding and public consultation happened over the course of 20 months - meetings with students, staff, and the Web Advisory Committee, mockups and screenshots, revisions, beta tests of the new designs, online polls and feedback forms. The redesign was complete by Fall 2011.

I don't know how much money UW paid White Whale to fly back and forth from California all that time and redesign the website from the ground up, but the effort put into it in 2010 and 2011 certainly dwarfs the two days of consultation that were done before this most recent design was unveiled.

I want to specifically address the issue of accessibility. WCAG (Web Content Accessibility Guidelines) is the industry standard for making sure all people, regardless of ability, can access content on a web page. Back in July 2010, a special meeting of UW's Web Advisory Committee was held, where it was decided that the White Whale redesign would meet the WCAG requirements to Level AA. (PowerPoint file, see slide 9) Now, we're being told that that website did not, in fact, meet the guidelines. If it was a project requirement in the first place, why was the website allowed to go ahead without being WCAG compliant? To boot, this new redesign is apparently only Level A compliant: a less stringent, and therefore less accessible, target.

One last point: it appears that all references to White Whale Web Services have been erased from the UW website, and its archives now only go back as far as November 2011. The blog that charted the progress of the White Whale redesign is gone. Most of the Google search results for "white whale uwaterloo" are now dead links. I have no idea why this is the case.

To conclude, I am utterly confused as to why UW felt the need to completely revamp its website and erase any trace of the previous redesign, which was an epic undertaking of nearly two years. This new site doesn't adhere to the university's positioning guide, sets a lower bar for accessibility, and was definitely the wrong way to go about solving the problems of the previous website.

]]>
<![CDATA[Now it's easier than ever to fine-tune your Bandcamp players]]> https://samnabi.com/blog/now-its-easier-than-ever-to-fine-tune-your-bandcamp-players https://samnabi.com/blog/now-its-easier-than-ever-to-fine-tune-your-bandcamp-players Sun, 24 Jun 2012 14:22:00 +0000 Is your player not working? In October 2012, Bandcamp changed the way they handle layout files. If you created a custom layout prior to this time and it's not working, you'll have to make a new one, I'm afraid. Back in 2010, I created a handy little app to generate JSON code for the Bandcamp API. The API lets you customise your bandcamp players, offering detailed options beyond the standard five or six layouts.

Now, I've completely rewritten the app to make it faster and easier to use. You don't need to know a thing about JSON anymore, nor do you need your own server to upload the layout files. It's just point, click, copy, and paste.

Having complete control over the look and feel of your embedded Bandcamp players has never been easier. So head on over and try it out!

http://bandcamp.samnabi.com

And by the way, you can generate players for any album on Bandcamp, including big name artists like Sufjan Stevens and Coeur de Pirate. Awesome, eh?

]]>
<![CDATA[Refining reddit]]> https://samnabi.com/blog/refining-reddit https://samnabi.com/blog/refining-reddit Tue, 17 Apr 2012 01:44:00 +0000 Reddit is a fascinating blend of news aggregator, niche forums, and social network. I went from curious to hooked in a matter of days, and the website quickly climbed to the coveted top spot on the "most visited" list of my browser's start page.

But for a website that I spend so much time on, boy is it ugly.

Reddit Enhancement Suite is a popular browser add-on that offers fine-grained customization of your reddit experience. But my problem was the opposite. I didn't want more knobs and twiddly bits - I wanted to get to the content as quickly as possible, and make posts and comments easy to read and navigate.

I didn't want to get caught up in the karma game. I didn't care about custom banners or flair. I wanted to read, respond, and vote with as few distractions as possible.

So I made a CSS theme for reddit that strips out the extraneous details, makes content king, and facilitates reading. If you're a redditor, please do install it and let me know what you think in the comments. These are a few of the key features:

  • The header area sticks to the top of the page as you scroll, so you have easy access to all your subreddits and your inbox
  • Vote counts are hidden - the post's position on the page is a good enough indicator of its popularity
  • The content area is narrower, so you don't have lines that stretch across the entire screen
  • The softer colour scheme is easier on your eyes

P.S.: Two other browser add-ons have significantly added to my pleasure of using reddit: Hover Zoom and Reddit Hover Text. I suggest you install them if you browse reddit at all; they cut down significantly on the amount of clicking you have to do.

tl;dr: I made a minimalist reddit theme. Download it here.

]]>
<![CDATA[Windowpane menus with CSS]]> https://samnabi.com/blog/windowpane-menus-with-css https://samnabi.com/blog/windowpane-menus-with-css Mon, 21 Mar 2011 03:05:00 +0000 I've got a handful of web projects on the go, but none of them are ready for the limelight yet. I wanted to unveil some of them by now, since I feel like I've been posting a lot about politics lately. Instead, I'll show you a neat way to create a menu using a single background image.

It's actually very simple. The <ul> element has a background image. The <li> elements have a transparent background, and a solid border that blocks the background image from showing through. Finally, the <a> element has a semi-transparent png that is removed on hover to make the background image shine through brighter.

No need to splice images in Photoshop - you can achieve the same effect with some super clean code and a little creativity!

Take a look at the demo to see the final product. You can also download the source files (including images).

The HTML structure is as simple as can be:

<ul>
    <li><a href="#" class="selected">Tickets</a></li>
    <li><a href="#">Showtimes</a></li>
    <li><a href="#">Tour Info</a></li>
    <li><a href="#">Merch</a></li>
    <li><a href="#">Contact</a></li>
</ul>

And this is the CSS:

ul {
    display: block;
    float: left;
    background: white url(navbg.jpg) no-repeat;
    list-style: none;
    margin: 0;
    padding: 0;
}
ul li {
    float: left;
    border-left: 10px solid white; /* Blocks the background image from showing through the cracks */
}
ul li a {
    background: transparent url(overlay.png) repeat; /* Semi-transparent PNG lets the background image show through. */
    color: #fff;
    display: block;
    line-height: 20px;
    padding: 0 10px 2px;
    text-decoration: none;
    padding-top: 100px;
}
ul li a:hover, ul li a.selected {
    background: transparent;
}

Admittedly there are limitations with this method. First, the area around the menu must be a solid colour (unless you use a border-image, which doesn't have enough browser support yet). Also, the entire menu needs to be floated, so you might have to do some clearing to make it work with your layout.

As always, if you can make this code better or know someone else who has, please let me know in the comments!

]]>
<![CDATA[3D box effect with CSS]]> https://samnabi.com/blog/3d-box-effect-with-css https://samnabi.com/blog/3d-box-effect-with-css Tue, 25 Jan 2011 09:27:00 +0000 (For the impatient, here's the link to the live demo with source files.)

While we wait (and wait... and wait...) for CSS3 to be implemented across all the major browsers, I thought I'd post my method for creating a 3D box effect using plain old CSS2 and less than 300 bytes of images. Of course, the border-image property will eventually make this method obsolete, but who knows when that will be?

First, a caveat: with this method, you need to specify a fixed width and size the images accordingly. A fluid width version wouldn't be too hard to do, but it would involve some nested divs and more images.

Take a look at the screenshot above. The box is made up of 3 divs (I know, I'm cringing too, but I couldn't find a simpler way of doing it without relying too much on images). Here's the html:

<div class="box-top"> </div>
<div class="box-content">
    <h1>Content Title</h1>
    <p>Content goes here.</p>
</div>
<div class="box-bottom"> </div>

These are the only two images you need (source files here):

And now for the CSS to pull it all together:

.box-content {
    width: 180px;
    background-color: #FEFF91;
    margin: 0;
    padding: 1px 10px; /* Top and bottom values cannot be zero */
    border-left: 20px solid #D8D97C;
}
.box-top, .box-bottom {
    height: 20px;
    width: 220px;
    background: #FEFF91 url(box-top.gif) no-repeat;
    margin: 0;
    padding: 0;
}
.box-bottom {
    background-image: url(box-bottom.gif);
}
h1 {
    margin: -15px 0 0 0;
}

Easy! If you want to see a live demo and download the source files, check it out here. It's really easy to change the colours of the images with the paint bucket tool in Photoshop (or even Microsoft Paint, whatever).

]]>
<![CDATA[Generate the JSON code for those new Bandcamp embeddable players with my handy little app.]]> https://samnabi.com/blog/generate-the-json-code-for-those-new-bandcamp-embeddable-players-with-my-handy-little-app https://samnabi.com/blog/generate-the-json-code-for-those-new-bandcamp-embeddable-players-with-my-handy-little-app Tue, 07 Dec 2010 20:44:00 +0000 Edit: There is a new version of this app that's easier than ever to use! Check out the new version.

If you're a tech-savvy musician and haven't heard of Bandcamp, you're missing out. One of their many phenomenal features that was recently released is the ability to have complete pixel-perfect control over the layout of your embedded media players (like the one you can see in the sidebar to the left).

To create the settings for these custom layouts, you need to muck around in JSON (which is not very fun). So I put together a little app that lets you customize your player through a form, and then copy the code that it generates - easy as pie. Check it out:

http://demo.samnabi.com/bcembed

Leave bug reports and feature requests in the comments below. I hope this is useful for some people!

Note: There are still a couple features to add, namely support for text colour and tracklist row height. I'll get to those after I'm done my research report - this app is a result of me procrastinating on that!

]]>
<![CDATA[Highlight the current category for single posts in WordPress]]> https://samnabi.com/blog/highlight-the-current-category-for-single-posts-in-wordpress https://samnabi.com/blog/highlight-the-current-category-for-single-posts-in-wordpress Wed, 03 Nov 2010 22:59:00 +0000 When you browse a category in WordPress, a current-cat class is added to the category's list item in the wp_list_categories menu. This is really useful for styling your menu so readers have a visual cue of where they are in your blog.

But when viewing an individual post, the current-cat class doesn't get generated. To generate it when your visitors are reading a single post, insert the following code in your theme's functions.php file.

// Generate the current-cat class when viewing single posts
class singlePostCurrentCat {
  function wp_list_categories ($text) {
    global $post;
      if (is_singular()) {
        $categories = wp_get_post_categories($post->ID);
        foreach ($categories as $category_id) {
          $category = get_category($category_id);
          $text = preg_replace(
            "/class=\"(.*)\"><a ([^<>]*)>$category->name<\/a>/",
            ' class="$1 current-cat"><a $2>' . $category->name . '</a>',
          $text);
        }
      }
    return $text;
  }
}
add_filter('wp_list_categories', array('singlePostCurrentCat','wp_list_categories'));

(Adapted from Kahi's Highlight Used Categories plugin.)

]]>
<![CDATA[I'm setting up a proper blog.]]> https://samnabi.com/blog/im-setting-up-a-proper-blog https://samnabi.com/blog/im-setting-up-a-proper-blog Fri, 01 May 2009 22:42:00 +0000 Hey hey! I'm trying to get all my social networks in line and playing nice with each other, so I only have to post a blog once and it'll go everywhere. I've got my Twitter, Myspace status and Facebook status all synced together, and I'm trying to do the same for blog posts.

Anyway, bear with me as the next few blog posts I make will probably be me trying to figure this whole situation out.

— Sam

]]>
<![CDATA[Why do I stay up so late?]]> https://samnabi.com/blog/why-do-i-stay-up-so-late https://samnabi.com/blog/why-do-i-stay-up-so-late Thu, 27 Jul 2006 03:08:00 +0000 What value is there in spending countless hours in front of the computer monitor? What outcomes does it have in my subconscious mind? Are there psychological factors at play which tie me into the realm of the Internet? I think I am fascinated with the wealth of information and opportunities for self-expression that the Internet has to offer.

There is so much to explore, and it's so vast that I will never be able to satisfy my desire to learn more. I think that the reason I started getting into web design is that it fulfilled a creative desire for me, but even more so, that I could reach out to a community with the click of a mouse.

I wanted to understand more about the inner workings of the Internet, and I think I chose web design because it lets me get down and dirty with the source, but it's not such an overwhelmingly complicated task that I'd have to devote my entire life to it. I love the creative side of things, and the various forums that I can go to for help, and to help others, builds a sense of community.

I am obsessed with the web browsing experience and what I can do to make mine better and more fulfilling. I recently downloaded Opera and Flock, two browsers that stray off the beaten path that I walk with Firefox and Internet Explorer. I now find myself using IE a lot more, now that I have downloaded the IE7 beta. Its visual experience is superb and it's very easy to use. My previous bias against the browser has softened a bit with the introduction of IE7. Flock, in my opinion, is the best browser out there for teens. The built-in blogging tools and photo uploading tools are amazing. This integration into the browser makes things so much more streamlined, in the goal of optimizing time.

But the more useful features I find, the more time I seem to be spending on my computer. It's hypnotic, really. There is no end. There are no limitations. On the Internet, I can mask my identity, change who I am, play through countless roles, and experience so many different things. It's a wealth of knowledge and interactivity, which at the same time stimulates my imagination and makes me zone out into a state of subconsciousness. Sometimes, I look around, and the moment I tear my gaze from the monitor, everything seems so much more real.

That's the thing about the Internet. It seems interactive, but you're only using two of your five senses. I should really spend less time surfing the net, and more time out doing stuff. Stuff that will stimulate both my mind and my body. The Internet is like a black hole, sucking the vast majority of teens into itself via myspace and youtube. It's dangerous. As the saying goes, go out and smell the roses. (Is that really a saying? I thought it was, but now I'm not too sure.)

]]>
?>