A Giffen good is something that people buy more of as its price increases, in apparent defiance of standard supply-and-demand theory. Giffen goods are distinct from luxury goods, which show similar behavior for a different reason: luxury goods are assumed to become more desirable as they become more expensive, since they confer more status. A Giffen good, by contrast, is purchased more because, as its price rises, the consumer can no longer afford to purchase higher-quality goods. Normally, these two effects -- wanting an inferior good less as it goes up in price, and having lower purchasing power as goods become more expensive on average -- combine so that you buy less of something as it becomes more expensive. For a Giffen good, the opposite is the case. Wikipedia, as always, is a good source of information on the subject:
So going back to the topic of the post: US Treasuries. In light of the S&P downgrade of US sovereign debt, investors have made a "flight to quality", allocating more money to investments with lower risk. Articles such as this mention how investors have "dumped equities in favor of traditional havens like gold and U.S. Treasury securities" (emphasis added). Yes, because of the credit downgrade, people are flocking to US Treasuries -- the very thing the credit downgrade was about.
While it seems counterintuitive at first, it makes sense when Treasuries are thought of as a Giffen good. In this case, all "prices" are measured in terms of risk. Investors roughly have a certain amount of risk that they are willing to take on, which represents their "budget". Treasuries are a cheap, but low-quality, way of spending this risk budget. The credit downgrade increased the risk -- or "price" -- of Treasuries, and suddenly everyone's risk budget has been slashed. Because investors can't afford as much risk, they have to go back to buying the cheap good to fill out their portfolios. This explains why, when S&P said that US Treasuries were not as risk-free as previously thought, rates actually decreased.
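Here's a toy version of that budget arithmetic (all the numbers are mine, purely illustrative): an investor stays fully invested across $100, prefers equities for their return, but caps total portfolio risk. Treasuries start at 1 risk-point per dollar, equities at 5:

```python
def treasury_demand(risk_budget, treasury_risk, equity_risk, total=100):
    """Fully invest `total` dollars, putting as much as possible into
    equities, subject to a cap on total portfolio risk. Returns dollars
    in (treasuries, equities). Purely illustrative numbers."""
    # Largest e with: treasury_risk*(total - e) + equity_risk*e <= risk_budget
    e = (risk_budget - total * treasury_risk) / (equity_risk - treasury_risk)
    e = max(0.0, min(float(total), e))
    return total - e, e

# Before the downgrade: Treasuries cost 1 risk-point per dollar.
print(treasury_demand(200, 1, 5))  # -> (75.0, 25.0)
# After the downgrade their "price" doubles -- and demand for them rises.
print(treasury_demand(200, 2, 5))  # -> (100.0, 0.0)
```

The downgrade shrinks the effective risk budget; since equities are still "expensive" in risk terms, the only way to stay fully invested is to hold more of the now-riskier Treasuries -- exactly the Giffen pattern.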
Of course, this all assumes that the credit downgrade is the only thing on investors' minds, and it only directly affects Treasuries.
I'm not sure why, but lately I've been seeing a lot of prepaid credit cards being advertised as gift cards. And why not -- it's like giving a gift card but without constraining where you can spend it. It's great for the credit card companies, since they make money, and good for the gift giver I suppose, since it seems more thoughtful than giving cash.
I'm sure the credit card companies make quite a bit of money off these things. First, there are the transaction fees that they charge on any credit card transaction, which I doubt they waive for products they market as gift cards. Second, there are the hefty maintenance fees that they charge, up to $50 a year. Third, there are activation fees (though thankfully I haven't seen evidence of a card that charges both an activation fee and maintenance fees). Fourth, and possibly most lucrative, the credit card company gets all the money that you don't spend on the card.
And this is a story about how much work it was to spend almost all the money on the cards. When you think about it, how exactly would you charge exactly $100 -- the full balance -- to a credit card? Unless you have a lot of time on your hands, the only way to do it is with the cooperation of the merchant. Some brick-and-mortar cashiers are nice enough to ring up your transaction separately. Some online merchants (dell.com, for instance) let you pay with multiple credit cards and specify the amount for each one.
Oh wait, did you forget about the one-dollar holds that most merchants issue to make sure your credit card is valid? It's ok, I did too. That means you have to ask the merchant to put only $99 on the card. Or you have to get them to place the hold, wait the few days until it disappears, then go back. Too much work for a dollar? The credit card company is counting on you thinking that.
I found what I consider to be a decent solution: I used my prepaid cards to buy myself Amazon gift cards. Amazon gift cards, compared to the prepaid cards, are far more flexible. They are applied automatically to any purchase on amazon.com, can be used partially or to pay for part of an order, and never expire. This is great, since I can just transfer the money to my amazon account and be comfortable knowing that I'll spend it eventually.
Oh, but what billing address did your gift giver sign you up for? Of my three cards, one was easy, since they only had one of my addresses. For one of the other two, I had to try a number of address+phone number combinations until Amazon could get the charge through. For the other one, I had to go to the gift card site and enter my billing information.
In the end, I managed to get all but $2 of the money transferred to Amazon. And spent an hour doing it. Not that I'm not thankful for the money that I received, but if you're considering giving someone one of these prepaid gift cards, do everyone a favor and get them an Amazon gift card instead.
Interesting write-up in favor of brawny cores, by Urs Hölzle:
Seems like the big argument is that many operations are latency-oriented, rather than throughput-oriented. This is clearly true for web serving, where a user is waiting for a result, but he makes the point that even throughput-oriented batch processing is latency-sensitive, and becomes more so as you parallelize across more cores (because you must wait for all the jobs to finish).
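That last point is easy to see with a toy simulation (my own sketch, not Hölzle's numbers): if each parallel task's runtime has any spread, the job finishes only when the slowest task does, so expected job latency grows with the number of tasks even though per-task mean runtime stays constant.

```python
import random

def job_latency(num_tasks, trials=2000, seed=0):
    """Mean completion time of a batch job split across num_tasks parallel
    tasks, each with mean runtime 1.0 (exponentially distributed here just
    to give the runtimes some spread). The job finishes only when the
    *slowest* task does."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1.0) for _ in range(num_tasks))
    return total / trials

for n in (1, 10, 100):
    print("%3d tasks -> mean job latency %.2f" % (n, job_latency(n)))
```

With more tasks, the straggler effect dominates: the mean latency climbs even though each individual task is no slower on average.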
I've been saying for a while that I didn't think any of the "Linked Data" or related initiatives would succeed, simply because they were trying to solve the wrong side of the chicken-and-egg problem. Why would anyone add these tags to their pages if no one was going to use them? Instead, I said that the only way it could happen (and that indeed it would) is by having search engines push for a specific format. Already, SEO is a huge driving force behind having well-structured web pages. There is huge incentive to make your website easily accessible to search engines, so you structure the page along the lines that the indexer expects. Search engines are the only ones with the clout to change the way that people write web pages, and the only ones with incentives to do so.
Now that search engine companies have put in a lot of time and effort into extracting structured data from webpages, they have built the systems that use that data, and they can finally start pushing for people to make that easier to collect. And we see that today with the introduction of schema.org:
I have a list of tabs that I open every day, which I store in a folder in Chrome. Recently the size of this folder passed some threshold, and now Chrome asks me if I'm sure I want to open so many tabs at once.
Interestingly, the default choice on Windows is "Yes", whereas the default choice in Linux is "No". I assume they did this deliberately, so that makes me wonder what this says about Windows versus Linux programs and users.
I work at Dropbox, where I'm constantly making live changes to the prod backend system. For the protection of our systems and data, and also for my own sanity, I've had to learn some tricks for making sure that things don't go wrong. It's kind of in the same vein as "defensive programming", but different in some key ways: the stakes are much higher, the constraints on availability/usability are looser, and the users have direct access to the source code. It's also similar to techniques such as pair programming -- and at the extreme end, multiply-implementing the same set of features with different teams -- which require much greater programming time. At Dropbox we have a unique combination of large scale but extremely limited engineering time, which means that I can't rely on anyone right now to review my code, or anyone later to understand it (including me), and yet it has to not mess up, ever.
Thankfully, although we may be extremely busy at Dropbox, none of us are malicious, and with the right set of safeguards in place, most errors can be prevented. The key feature is that reasonable assumptions about the program should lead to reasonable behavior.
One problem is that often the people running the scripts have no idea what the context of the script is, what it requires, or what side-effects it has, so they will make poor assumptions. A second problem is that although it can be easy to think of more safeguards that could be added, adding more can often be counter-productive. This is because if a script is too hard to use, people may just not use it, or worse, try to get around the safeguards. So as much as I've thought about safeguards, I've also had to think about their impact on people's ability and desire to use the scripts. I'm not saying that I've found anything close to the "best" tradeoff on this point, but here are some of the techniques that I use, roughly in increasing order of restrictiveness:
- Tons and tons of assumption checks. This is pretty straightforward, but a perhaps non-obvious way to make it better is to try hard to check everything at the beginning of the program -- no one wants a script to throw an assertion error after running for a few hours, and then have to figure out how to back out the changes. This can be difficult, since it's often hard to tell what the state of the system will be at an arbitrary point in the script's execution, but bounds can often be found for the relevant properties (e.g. disk usage doesn't grow faster than a certain rate; running out of disk space is a large source of problems). Depending on the script, it may make sense to over-assert; I often use more of a "whitelist" approach, where I assert that the environment is exactly what I imagined, rather than "blacklisting" specific bad states. Assumptions change all the time, though, and it's impossible to foresee everything that can change, so usually some of the other tricks are needed.
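As an illustration of front-loading the checks (the directory name, function name, and threshold here are made up for this sketch, not real Dropbox code), a script might run something like this before touching anything:

```python
import os
import shutil

def check_preconditions(data_dir, min_free_bytes):
    """Run every assumption check up front, before any work starts, so the
    script can't die hours in with half-applied changes. (Names and
    thresholds here are made up for illustration.)"""
    # Whitelist-style: assert the environment is what we expect, rather
    # than checking for specific known-bad states.
    assert os.path.isdir(data_dir), "missing data dir: %s" % data_dir
    # Bound the script's worst-case disk growth rather than trying to
    # predict usage at every intermediate step of the run.
    free = shutil.disk_usage(data_dir).free
    assert free >= min_free_bytes, \
        "only %d bytes free, need %d" % (free, min_free_bytes)
    return True

# check_preconditions("/srv/mydata", 10 * 2**30)  # fail fast, then do the work
```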
- Help users of the script form reasonable assumptions. A script called "add_data" should not modify or delete anything, and no script should unrecoverably delete data without confirmation. A script should not elevate its permissions without notification (e.g. by automatically reading keypairs or calling sudo). This often just means displaying what's about to happen and requiring confirmation, or telling the user to run it as root.
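A minimal sketch of that confirmation step (an illustrative helper, not Dropbox's actual tooling; the `ask` parameter is just there so the check can be exercised without a terminal):

```python
import sys

def confirm_or_abort(action, ask=input):
    """Tell the user exactly what's about to happen and require a typed
    'yes'; a bare Enter or a stray keystroke must not count as consent."""
    reply = ask("About to %s. Type 'yes' to continue: " % action)
    if reply.strip().lower() != "yes":
        sys.exit("Aborted: did not %s." % action)

# Interactive usage, right before the destructive part of a script:
#   confirm_or_abort("permanently delete 3 user shards")
```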
- Add documentation. People have extremely limited patience for reading documentation, especially at Dropbox, so the documentation has to be extremely understandable to be useful. I tend to just give a recommended set of parameters for the script, and then more detailed info on the individual options.
- Disabling scripts that become out of date. This pretty clearly needs to be done, but it's often hard because people can just run a previous version of the script. There are a couple of ways I've tried to make this more bullet-proof:
- Making the script refuse to run after a certain amount of time. This works pretty well for scripts that don't get run that often, since such a script will probably have to be updated each time it's run anyway.
- Making the script check whether there's a new version in the repository, and fail if it's not up to date or if it can't check. This is pretty annoying, since our repositories constantly get commits.
- Making the script default to not running. People often make assumptions about how dangerous a given script is, and if a script seems not that bad, they may just try running it. Or someone may run it by accident (one of the side effects of using a spell-correcting shell is that this happens more often than I'd like). All of our scripts that delete data without asking for permission require a "-f" flag to do anything, or have other checks to make sure the user meant to delete data. Our most dangerous scripts don't do anything unless you first edit the script to remove the line that makes it exit early.
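Two of these safeguards -- an expiry date and a "-f" flag -- can be sketched together like this (the expiry date, flag name, and `today` parameter are all made up for the sketch; `today` is injectable just so the expiry check can be exercised):

```python
import argparse
import datetime
import sys

EXPIRES = datetime.date(2011, 12, 31)  # refuse to run past this date

def main(argv=None, today=None):
    today = today or datetime.date.today()
    if today > EXPIRES:
        # Force a human to re-review the script's assumptions before
        # extending its lifetime.
        sys.exit("This script expired on %s; review it before using." % EXPIRES)
    parser = argparse.ArgumentParser(description="delete stale data (sketch)")
    parser.add_argument("-f", "--force", action="store_true",
                        help="actually delete; without this flag, dry-run only")
    args = parser.parse_args(argv)
    if not args.force:
        print("Dry run only; pass -f to really delete.")
        return
    print("Deleting stale data...")
```

Run without -f, it only reports what it would do; run after the expiry date, it refuses outright.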
These are just some ways to make sure nothing unintended happens, but don't help ensure that the intended behavior is actually good. Maybe in another blog post I'll talk about some tricks there.
I'm a big fan of email reminder services, since my email inbox is the only thing that I reliably check on a regular basis. It's extremely convenient for me if I can set up something to ping me via email to remind me when I need to do something.
So I've used a couple services over the years: first, I used iwantsandy.com, which was a pretty cool site. But the company behind it eventually closed down, and shut down the service. So I switched to gopingme.com, which was also pretty good. But now they've decided to discontinue the service. Now I've decided to try out task.fm. I've only just entered in my reminders, but it seemed pretty painless and I assume that the reminders will work as I want. Let's hope that they stay around...
What is it about these products that makes them so hard to maintain? gopingme.com mentioned how they spent a disproportionate amount of their time dealing with spam, which I suppose makes sense.
I saw this interesting blog post today about where the Native Client (NaCl) team is going. I thought the idea was really intriguing: you compile your code once to LLVM bitcode, and then each browser has a JIT'er that turns it into fast native code. It seems like a natural progression from sending scripts across the web; now, the site owner can do a one-time compile step on their powerful server, and get much better performance for their clients. Cool.
But then I thought -- wait, isn't this like Java? The promise of Java was that you could take your code, compile it once, and then run the resulting bytecode anywhere. I think there are a couple of things that NaCl can and will do better than Java: a better security model (I think the problem with Java was that developers were limited to writing "applets", which wasn't great), and better performance. But neither of those seems fundamental to the two designs; it seems like you should be able to design a good JIT'er for Java bytecode almost as easily as for LLVM bitcode, and it seems like the security of the two systems could be made just as good. Maybe it's just the right time for NaCl, or maybe Google/the open source community will execute better. Or maybe NaCl will meet the same fate as Java on the web.
Seriously. How often do you now get emails from people that clearly indicate they have fallen victim either to a Facebook hack or a Facebook scam? This is like 1999 all over again, but with Facebook instead of email.
When will they ever get it right? Every new update crashes my iPhone. And this time a simple (hah!) restore isn't even good enough. My iPhone is really messed up now.
My iPhone crashes every other sync. So I'm not sure what to do -- stop syncing it to keep it from crashing, or sync it more to keep the backups recent for when I inevitably have to restore from one.
Jeez, iTunes + iPhone OS is a bunch of crap.