ときどきの雑記帖 2012


Back: 2012 July (late)
Forward: 2012 August (mid)




I'd rather not tally up my overall record on this (laughs). Once again the pattern where the translation comes out while I'm still reading the original: O'Reilly Japan - Think Stats

O'Reilly Japan - Think Stats

Think Stats――プログラマのための統計入門

    Allen B. Downey (author); translated by 黒川 洋 and 黒川 利明
    Scheduled for release August 25, 2012

I only bought my copy recently, so I had it slightly wrong: the original actually came out in July of last year.

Think Stats - O'Reilly Media

Think Stats
Probability and Statistics for Programmers
By Allen B. Downey
Publisher: O'Reilly Media
Released: July 2011
Pages: 138

I bought the original as an ebook, so should I wait for the translation to go digital too? But it will probably be a PDF (hard to read on the Kindle).


All sorts of things left half-done. I translate a little, then put it aside. I'd like to finish one or two over the summer break…

Perfect language
#AltDevBlogADay ≫ Programmer Moneyball
Ruminations of a Programmer: Does category theory make you a better programmer ?
Why OO Sucks by Joe Armstrong
The C Programming Language - Part 0 - OnCoding
Improving Security in the Latest C Programming Language Standard ≫ SEI Blog
PHP is much better than you think - Fabien Potencier
Brief History of Spreadsheets, v. 3.6


The Codist

Bad Software, Worse Solutions: Programming Will Always Be Hard

Published: 08/09/2012

Software Runs the World: How Scared Should We Be That So Much of It Is So Bad?

The article itself is no big deal, but the comments were really interesting, especially all the PhDs
in Computer Science who said following formal methods would make it all better, plus the usual
folks who want regulation, certification and licensing.

All I could think of was "good luck with all that". After 30 years of writing code for
a living I feel secure in saying that there is no silver bullet, no magical method or paper
guarantee that will make all software perfect and bug free.

Eli Bendersky's website » Using sub-generators for lexical scanning in Python

Using sub-generators for lexical scanning in Python

August 9th, 2012 at 2:45 pm

A few days ago I watched a very interesting talk by Rob Pike about writing a non-trivial lexer in
Go. Rob discussed how the traditional switch-based state machine approach is cumbersome to write,
because it’s not really compatible with the algorithm we want to express. The main problem is
that when we return a new token, a traditional state-machine structure forces us to explicitly
pack up the state of where we are and return to the caller. Especially in cases where we just
want to stay in the same state, this makes code unnecessarily convoluted.
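The shape of that approach can be sketched in a few lines of Python: in a generator, the lexer's position lives in ordinary local variables, so emitting a token (via yield) never requires explicitly packing up state-machine state. This is my own minimal illustration, not code from the talk or the post:

```python
import re

# Token patterns; the names and this tiny grammar are my own invention.
PATTERNS = [('NUMBER', re.compile(r'\d+')),
            ('IDENT',  re.compile(r'[A-Za-z_]\w*')),
            ('OP',     re.compile(r'[+\-*/^()]'))]

def lex(text):
    """Yield (kind, value) tokens. The scanning state is just the local
    variable pos, preserved across yields by the generator."""
    pos = 0
    while pos < len(text):
        if text[pos].isspace():
            pos += 1
            continue
        for kind, pattern in PATTERNS:
            m = pattern.match(text, pos)
            if m:
                yield (kind, m.group())
                pos = m.end()
                break
        else:
            raise ValueError('unexpected character %r' % text[pos])

print(list(lex('x + 42')))
```

The caller just iterates over lex(source); there is no state object to thread through.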

I guess unless I'm on Google+ myself, I won't spot updates there in a timely way. What to do. There are a few people I'd like to follow who post mainly on Google+.

■_ Shortest code

An exchange on Twitter. I spotted Sasada-san's tweet and thought I'd have a go myself, but no sooner had I thought that than it was already solved.


■_ C82


I plan to go on Saturday (and Sunday, energy permitting), but this thing that came through in an RT, lol

The source material is here. Of all the Votoms next-episode previews, this one is my favorite: 装甲騎兵ボトムズ 第21話「遡行」








Access, I just don't get it



Fiddling around in Access. Well, still better than doing it in Excel…

■_ Gerhard

Which, to me, means Berger (don't ask). InfoQ put up video and slides of Faith, Evolution, and Programming Languages, and it was quite interesting (I keep saying that about everything). But a person named Gerhard Gentzen near the start of the slides stumped me. ゲルハルト・ゲンツェン - Wikipedia Faith, Evolution, and Programming Languages | Philip Wadler : programming

ゲルハルト・ゲンツェン - Wikipedia

Gerhard Karl Erich Gentzen (November 24, 1909 - August 4, 1945) was a German logician and
mathematician, a student of Hermann Weyl and Paul Bernays. At the University of Göttingen, under Weyl

His main achievement was establishing the proof-theoretic systems known as natural deduction (NK, NJ)
and the sequent calculus (LK, LJ). Natural deduction is, as the name suggests, an intuitive,
easy-to-follow system close to actual human reasoning. The sequent calculus, by contrast, consists of
the minimal axiom A → A plus inference rules for structure and for the logical connectives. NK and
LK handle classical logic; NJ and LJ handle intuitionistic logic. Gentzen proved the cut-elimination
theorem (the Hauptsatz) for LK. This theorem shows that a derivation of a theorem can be arranged so
that nothing more complex than the theorem itself and the axioms appears in it, and it was used in
the completeness proof for LK. Among his other results is a consistency proof for pure arithmetic.

Huh, interesting. The English version, by the way, is here → Gerhard Gentzen - Wikipedia, the free encyclopedia

■_ Today's "I didn't know that"

Part two. wrong, rogue and booklog - words that carry real weight. “That's the problem with a lot of...” An Unexpected Ass Kicking | Blog Of Impossible Things — it mentions the very earliest days of computer development, but the name didn't ring a bell for me (^^;

An Unexpected Ass Kicking | Blog Of Impossible Things

An Unexpected Ass Kicking

Added Aug 2, 2012, Under: Blogging,Good Stories,Kicking A**,Life,Personal Development

By Joel Runyon

I sat down at yet-another coffee shop in Portland determined to get some work done, catch up
on some emails and write another blog post.

About 30 minutes into my work, an elderly gentleman at least 80 years old sat down next to
me with a hot coffee and a pastry. I smiled at him and nodded and looked back at my computer as
I continued to work.


“I invented the first computer.”

Um, excuse me?

“I created the world's first internally programmable computer. It used to take up a space about
as big as this whole room and my wife and I used to walk into it to program it.”

“What's your name?” I asked, thinking that this guy was either another crazy homeless person in
Portland or legitimately who he said he was.

“Russell Kirsch”


With that, we shook hands, he got up, walked to his car and drove off as I just sat there trying
to figure out what exactly had just happened. As I sat there thinking, two things he said
reverberated in the back of my mind:

    Nothing is withheld from us which we have conceived to do.
    Do things that have never been done.

The first meaning: if you've conceived something in your mind, decide to do it, and are willing
to put in the work – nothing can stop you.

The second is fairly self-explanatory but carries the extra weight of it coming from the guy who
invented the very thing that's letting me type these words out on the internet.

    “Do things that have never been done before” - The guy who invented the computer


Time to step it up.

It reminded me of the closing line of the Star Trek (especially TNG) opening narration > “Do things that have never been done before”. I love that Star Trek bit.


Russell A. Kirsch - Wikipedia, the free encyclopedia

Russell A. Kirsch

Russell A. Kirsch (born 1929) led a team of colleagues which, between 1947 and 1950, created
America's first internally programmable computer, the Standards Eastern Automatic Computer
(SEAC)[1]. By 1957 Kirsch and his team had invented a scanner which, using the computing power
of SEAC, converted photographs to digital images.[2] This breakthrough created the basis for
satellite imaging, CAT scans, bar codes, and desktop publishing.[3]


While working at the National Bureau of Standards (NBS), Kirsch and his team of colleagues, Wright,
Shupe and Cooper, created America's first internally programmable computer, the Standards Eastern
Automatic Computer (SEAC) which became operable in 1950[1]. Internal memory greatly increased
computer processing power and speed, which allowed for quicker applications development. In 1957
the team unveiled the drum scanner, to “trace variations of intensity over the surfaces of
photographs”, and made the first digital image by scanning a photograph. The digital image,
picturing Kirsch's three-month-old son, consisted of just 30,976 pixels, measuring 5 x 5 cm.[2] They
used the computer to extract line drawings, count objects, recognize types of characters and produce
oscilloscope displays.[2] Kirsch also proposed the Kirsch operator for edge detection.

Timing-wise that's roughly contemporary with EDSAC. But I had no recollection of this SEAC thing at all… ○| ̄|_




I meant to write about that coding-standard business and my impressions of Math Girls, but, uh…

■_ Caught my eye


Monitoring the Execution of Space Craft Flight Software http://compass.informatik.rwth-aachen.de/ws-slides/havelund.pdf


■_ In R


For the Stupid Password Rules at Iowa State | (R news & tutorials)

The Fall semester is coming, which means it is time to log into several stupid systems to be
prepared for the new semester. Time and time again I'm annoyed by the bullshit password rules at
Iowa State University. I wrote to the IT staff once but no one ever responded. Here are their rules:

    Must be 8 characters in length
    Must contain at least one letter (a-z, A-Z) or a supported special character (@, #, $ only). All letters are case sensitive.
    Must contain at least one number (0-9)
    Cannot contain spaces or unsupported special characters
    Cannot be re-used after a password expires


Anyway, here is my new password:

  x1 = c(letters, LETTERS) # one letter
  x2 = c('@', '#', '$')    # one special char
  x3 = 0:9                 # one number
  # 8 characters
  p = c(sample(x1, 1), sample(x2, 1), sample(x3, 1),
        sample(c(x1, x2, x3), 5))
  cat(paste(sample(p), collapse = ''), '\n')

I'll probably leave this as an exercise to Stat579 students in the coming semester.

So: generating a rule-compliant password in R.
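For comparison, the same idea can be sketched in Python (my own port, not from the post). One character from each class is forced explicitly, which is slightly stricter than the quoted rules (they only require a letter *or* a supported special character):

```python
import random

LETTERS  = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
SPECIALS = '@#$'          # the only supported special characters
DIGITS   = '0123456789'

def password():
    """Generate an 8-character password satisfying the quoted rules:
    no spaces or unsupported specials can ever appear, and at least
    one letter, one supported special, and one digit are guaranteed."""
    pool = LETTERS + SPECIALS + DIGITS
    chars = [random.choice(LETTERS),
             random.choice(SPECIALS),
             random.choice(DIGITS)]
    chars += [random.choice(pool) for _ in range(5)]
    random.shuffle(chars)           # don't leak the forced-class positions
    return ''.join(chars)

print(password())
```

Like the R version, it samples the mandatory classes first and shuffles so the required characters don't always sit at the front.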




I wasn't directly involved, but close by there was a case of trouble caused by something slipping through review (a day may or may not come when I write about it). Not that they'd ever approve sending me to something like this → 専門書でもふれられていないソフトウェアレビューの原理・原則 - 日経SYSTEMSのセミナーで解説:森崎修司の「どうやってはかるの?」:ITmedia オルタナティブ・ブログ 日経SYSTEMSスキルアップLIVE 間違いだらけのソフトウエアレビュー Fee: general ¥38,000


It apparently came out a little while ago, but it jumped out at me in today's new-release info: Amazon.co.jp: Exploring Everyday Things With R and Ruby: Learning About Everyday Things: Sau Sheong Chang: 洋書

Amazon.co.jp: Exploring Everyday Things With R and Ruby: Learning About Everyday Things: Sau Sheong Chang: 洋書

Programming is not just for geeks. If you're curious about how things work, and want to get
programmatic solutions to everyday problems, this intriguing book will help you find what you're
looking for. By using some fundamental math and simple Ruby and R constructs, you'll learn how to
model the problem and work toward a solution. Discover reasons why things are the way they are by
spelunking data freely available on the Internet. You'll model and simulate environments and
systems, and then observe and analyze the results. See the world around you through programming

Sau Sheong Chang has been in software development, mostly web applications and recently cloud- and
data-related systems, for almost 17 years and is still a keen and enthusiastic programmer. He has
been active in programming with Ruby for the past 6 years and recently started with R for the past
year. He is active in the local developer communities, especially in the Singapore Ruby Brigade.
In April 2011 he co-organized the first and largest Ruby conferences in Southeast Asia, the 





「数学ガール」, 「それでも自転車に乗りますか」, 「」 — I'll get to these later(ry

A shop had one of these out front with a label giving the product name, so I used that as a search clue: the automatic umbrella-bag fitter "Kasa-pon". 傘袋自動装着器「傘ぽん」 / 新倉計量器株式会社 「傘ぽん」 | ヒット商品を支えた知的財産権 | 出版物のご案内 | 日本弁理士会の活動 | 日本弁理士会

「傘ぽん」 | ヒット商品を支えた知的財産権 | 出版物のご案内 | 日本弁理士会の活動 | 日本弁理士会






■_ Rust


Interview on Rust, a Systems Programming Language Developed by Mozilla


Posted by Abel Avram on Aug 03, 2012

Rust is a systems programming language developed by Mozilla and targeted at high performance
applications. This post contains an interview with Graydon Hoare, Rust's creator.

Graydon Hoare, a “language engineer by trade” as he calls himself, started working on a new
programming language called Rust in 2006. Mozilla became interested in this new language,
creating a team to continue its development and started using it for the experimental Servo
Parallel Browser Project.

Rust is a systems language for writing high performance applications that are usually written in
C or C++ but it was developed to prevent some of the problems related to invalid memory accesses
that generate segmentation faults. While Rust's syntax strongly resembles C, there are major
differences, some of its most interesting features being:

    Pattern matching and algebraic data types (enums)
    Task-based concurrency. Lightweight tasks can run in parallel without sharing any memory 
    Higher-order functions (closures)
    Polymorphism, combining Java-like interfaces and Haskell-like type classes
    No buffer overflow
    Immutable by default
    A non-blocking garbage collector

Mozilla has recently released a new alpha version of Rust and there is a roadmap with features
to be implemented in the near future. InfoQ has talked to Hoare to find out more about Rust
from an insider.





■_ Coding standards

Come to think of it, that C++ coding standard introduced a while back (said to be the one used for the F-35 software development) was also pretty interesting; I keep meaning to write something about it but haven't.

The standard itself is here (PDF): JPL Institutional Coding Standard for the C Programming Language http://lars-lab.jpl.nasa.gov/JPL_Coding_Standard_C.pdf

NASA JPL C Coding Standard [pdf] | Hacker News

A few summers ago I was an intern at JPL working on a static analysis suite for this exact standard.

Writing code checkers for these sorts of rules is a really interesting exercise and it helped me
grow a lot as a programmer! I went from having no exposure to formal languages, parsing, and
grammars to actively playing around with these concepts to try and help build more reliable
software. It was a humbling, challenging, and incredibly rewarding experience.

Sometimes, a rule is extremely simple to implement. For example, checking a rule that requires that
an assert is raised after every so many lines within a given scope is just a matter of picking the
right sed expression. Other times, you really need an AST to be able to do anything at all.



Ah, another low-effort day ○| ̄|_



I went → LL Decade | 2012年8月4日(土)銀座ブロッサムにて開催

I have photos from プログラミング言語処理系を自作してわかったこと | LL Decade and 俺たちの継続的hogehogeは始まったばかりだ! | LL Decade, but decided they can stay put. I took plenty of notes too, but reorganizing them feels like a chore(ry

It billed itself as a "summer festival" and I enjoyed it as one. But Ohmsha, who came to sell books on site, had no novelty freebies, so ○| ̄|_

Oh, and Kakutani-san's lightning talk was wonderful (more than I can put into words :).

■_ That thing

About シェル操作課題 (cut, sort, uniq などで集計を行う) 設問編 - Yamashiro0217の日記. I did it without looking at the answers at all (honestly), and it turns out someone had already done it in PowerShell (^^; So that's how you can write it. I see. シェル操作課題 (cut, sort, uniq などで集計を行う) 解答編 - Yamashiro0217の日記 「シェル操作課題」をPowerShellでやってみた - PowerShell Scripting Weblog [Power Shell] シェル操作課題への回答 - Pastebin.com

Also, these from the answer page's links look interesting: Ja:ParseLogWithYacq – Yacq twitterで流れてきたシェル操作課題をPSLで解いてみた | 生存確認兼近況報告日記


シェル操作課題 (cut, sort, uniq などで集計を行う) 解答編 - Yamashiro0217の日記



My impression is that not many people used uniq -c.

>cat hoge.log

>cat hoge.log | cut -d, -f1,4

>grep server4 hoge.log

>cat hoge.log | wc -l

>cat hoge.log | sort -k1,1 -k3,3n -t, | head -5


It says "I realized that the sort command can take a file", but don't cut and wc also accept a file name as an argument? I remember tr being one that only reads from standard input; I wonder if there were others.

There is the argument that starting a pipeline with cat file makes it easy to keep editing what comes after, but that is… (snipped. To read the rest…)






Looking at computer-book sales counts by store (2008-2011), "grim" is all I can say.

Looks like a translation of Introduction to Algorithms is coming out. Volume 1 seems to be on sale already. Will volume 3 get translated this time around? アルゴリズムイントロダクション 第3版 第1巻: 基礎・ソート・データ構造・数学

■_ (Expression) parsing

Some people out there have apparently invented their own peculiar, home-grown parsing methods, but leaving that aside: "precedence climbing" didn't ring a bell, but once I followed the code, maybe it did after all?

Eli Bendersky's website » Parsing expressions by precedence climbing

Parsing expressions by precedence climbing

August 2nd, 2012 at 5:48 am

I've written previously about the problem recursive descent parsers have with expressions,
especially when the language has multiple levels of operator precedence.

There are several ways to attack this problem. The Wikipedia article on operator-precedence parsers
mentions three algorithms: Shunting Yard, top-down operator precedence (TDOP) and precedence
climbing. I have already covered Shunting Yard and TDOP in this blog. Here I aim to present the
third method (and the one that actually ends up being used a lot in practice) – precedence climbing.

Wikipedia の演算子優先順位解析のアーティクルでは
Shunting Yard、top-down operator precedence (TDOP)、precedence climbing
という三つのアルゴリズムに言及しています。
Shunting Yard と TDOP についてはすでにこのblogでカバーしました。
今回は三番目の手法 (そして実際のところ実践で多く使われている手法) precedence climbing の説明を試みます。

Precedence climbing – what it aims to achieve

It's not necessary to be familiar with the other algorithms for expression parsing in 
order to understand precedence climbing. In fact, I think that precedence climbing is 
the simplest of them all. To explain it, I want to first present what the algorithm is 
trying to achieve. After this, I will explain how it does this, and finally will 
present a fully functional implementation in Python.

precedence climbing を理解するのに他の式解析アルゴリズムに慣れ親しんでいる必要はありません。
実のところわたしはこの precedence climbing が(式解析アルゴリズムの中でも)最もシンプルなものだと思っています。
説明のために、まずはこのアルゴリズムが何を達成しようとしているのかを示し、
そのあとでそれをどうやって行うのかを説明し、
そして最後に Python による fully functional な実装を提示します。

So the basic goal of the algorithm is the following: treat an expression as a bunch of 
nested sub-expressions, where each sub-expression has in common the lowest precedence 
level of the operators it contains.

このアルゴリズムの basic goal は次の通りです:
(ひとつの) 式を、それぞれが共通の最低優先順位を持ちネストした複数の sub 式の集まりとみなす。

Here's a simple example:

  2 + 3 * 4 * 5 - 6

Assuming that the precedence of + (and -) is 1 and the precedence of * (and /) is 2, we have:
ここで、+ と - の優先度が1で * と / の優先度が2とします。

  2 + 3 * 4 * 5 - 6

  |---------------|   : prec 1
      |-------|       : prec 2

The sub-expression multiplying the three numbers has a minimal precedence of 2. The 
sub-expression spanning the whole original expression has a minimal precedence of 1.

三つの数値を乗じている sub 式は 2 という minimal precedence を持ちます。
元の式全体に span している sub 式の minimal precedence は 1 です。

Here's a more complex example, adding a power operator ^ with precedence 3:
もう少し複雑な例を出しましょう。優先度 3を持つ巾乗演算子 ^ を追加します

  2 + 3 ^ 2 * 3 + 4

  |---------------|   : prec 1
      |-------|       : prec 2
      |---|           : prec 3


Binary operators, in addition to precedence, also have the concept of associativity. Simply put,
left associative operators stick to the left stronger than to the right; right associative
operators vice versa.


Some examples. Since addition is left associative, this:

  2 + 3 + 4

Is equivalent to this:

  (2 + 3) + 4

On the other hand, power (exponentiation) is right associative. This:

  2 ^ 3 ^ 4

Is equivalent to this:

  2 ^ (3 ^ 4)

The precedence climbing algorithm also needs to handle associativity correctly.

precedence climbing アルゴリズムでも結合性を適切に扱う必要があります。
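These grouping rules can be checked directly in any language whose operators follow the same conventions; in Python, + is left-associative and ** (the power operator, ^ in the article's notation) is right-associative:

```python
# Left-associative addition: 2 + 3 + 4 groups as (2 + 3) + 4.
assert 2 + 3 + 4 == (2 + 3) + 4

# Right-associative exponentiation: 2 ** 3 ** 4 groups as 2 ** (3 ** 4),
# not as (2 ** 3) ** 4, which is a very different number.
assert 2 ** 3 ** 4 == 2 ** (3 ** 4)
assert (2 ** 3) ** 4 == 4096

print('associativity checks passed')
```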

Nested parenthesized sub-expressions
ネストしたカッコ付きの sub 式

Finally, we all know that parentheses can be used to explicitly group sub-expressions, beating
operator precedence. So the following expression computes the addition before the multiplication:

わたしたちは皆、カッコ対を使うと演算子の優先順位を無視して explicitly に sub 式をグループ化できることを知っています。
ですから次の式では乗算よりも先に加算が計算されます:

  2 * (3 + 5) * 7

As we'll see, the algorithm has a special provision to cleverly handle nested sub-expressions.
あとで見るように、このアルゴリズムにはネストしている sub 式を
cleverly に handle するための special provision があります。

Precedence climbing – how it actually works

First let's define some terms. Atoms are either numbers or parenthesized expressions. Expressions
consist of atoms connected by binary operators [1]. Note how these two terms are mutually
dependent. This is normal in the land of grammars and parsers.

アトム (atom) とは数値かあるいはカッコで囲まれた式のいずれかです。
式 (expression) とは二項演算子で結合されたアトムからなります [1]。
これら二つの用語は mutually dependent であることに注意して下さい。
これは文法とパーサーの世界では普通のことです。

The algorithm is operator-guided. Its fundamental step is to consume the next atom and look at the
operator following it. If the operator has precedence lower than the lowest acceptable for the
current step, the algorithm returns. Otherwise, it calls itself in a loop to handle the
sub-expression. In pseudo-code, it looks like this [2]:

この Precedence climbing アルゴリズムは operator-guided です。
その fundamental step は次のアトムを消費し、それに続く演算子に注目するというものです。
注目した演算子が current step に対する lowest acceptable な precedence よりも
低い precedence であった場合、アルゴリズムは return します。
そうでない場合には sub 式を handle するためにループの中で自分自身を呼び出します。

  compute_expr(min_prec):
      result = compute_atom()

      while cur token is a binary operator with precedence >= min_prec:
          prec, assoc = precedence and associativity of current token
          if assoc is left:
              next_min_prec = prec + 1
          else:
              next_min_prec = prec
          rhs = compute_expr(next_min_prec)
          result = compute operator(result, rhs)

      return result

Each recursive call here handles a sequence of operator-connected atoms sharing the same minimal precedence.

上記のコードの再帰呼び出しのそれぞれでは、同一の minimal precedence を共有している operator-connected
なアトムの並びを handle します。

An example

To get a feel for how the algorithm works, let's start with an example:

  2 + 3 ^ 2 * 3 + 4

It's recommended to follow the execution of the algorithm through this expression on paper.
The computation is kicked off by calling compute_expr(1), because 1 is the minimal operator
precedence among all operators we've defined. Here is the "call tree" the algorithm
produces for this expression:

このアルゴリズムの実行をこの式について紙の上で追いかけてみることをお勧めします。
わたしたちが定義した演算子全ての中では 1 が minimal operator precedence なので、
計算は compute_expr(1) の呼び出しで開始されます。

  * compute_expr(1)                # Initial call on the whole expression
    * compute_atom() --> 2
    * compute_expr(2)              # Loop entered, operator '+'
      * compute_atom() --> 3
      * compute_expr(3)
        * compute_atom() --> 2
        * result --> 2             # Loop not entered for '*' (prec < '^')
      * result = 3 ^ 2 --> 9
      * compute_expr(3)
        * compute_atom() --> 3
        * result --> 3             # Loop not entered for '+' (prec < '*')
      * result = 9 * 3 --> 27
    * result = 2 + 27 --> 29
    * compute_expr(2)              # Loop entered, operator '+'
      * compute_atom() --> 4
      * result --> 4               # Loop not entered - end of expression
    * result = 29 + 4 --> 33

Handling precedence

Note that the algorithm makes one recursive call per binary operator. Some of these calls are short
lived – they will only consume an atom and return it because the while loop is not entered (this
happens on the second 2, as well as on the second 3 in the example expression above). Some are
longer lived. The initial call to compute_expr will compute the whole expression.

このアルゴリズムは二項演算子ひとつにつき一回の再帰呼び出しを行うことに注意してください。
そういった呼び出しの一部は short lived です -
while ループには入らないのでただ一つだけアトムを消費してリターンします
(上の例の式では二番目の 2 と二番目の 3 でこれが起きています)。
より長生きする呼び出しもあります。
compute_expr の initial call は式全体を計算します。

The while loop is the essential ingredient here. It's the thing that makes sure that the current
compute_expr call handles all consecutive operators with the given minimal precedence before returning.

ここで、この while ループは essential ingredient です。
これは exit する前に current の compute_expr の呼び出しが、
与えられた minimal precedence を持ったすべての consecutive operators を
handle することを確実にするものです。

Handling associativity

In my opinion, one of the coolest aspects of this algorithm is the simple and elegant way it
handles associativity. It's all in that condition that either sets the minimal precedence for the
next call to the current one, or current one plus one.

わたしの意見では、このアルゴリズムの coolest aspects のひとつは結合性を handle するその単純かつ elegant な方法です。
それはすべて、次の呼び出しに対する minimal precedence を現在のものにするか
現在のもの + 1 にするかを決めるあの条件に詰まっています。

Here's how this works. Assume we have this sub-expression somewhere:

次のようなsub 式があると仮定します。

  8 * 9 * 10
    ^


The arrow marks where the compute_expr call is, having entered the while loop. prec is 2. Since
the associativity of * is left, next_min_prec is set to 3. The recursive call to compute_expr(3),
after consuming an atom, sees the next * token:

ここで矢印は、while ループに入った時点での compute_expr 呼び出しの位置を示しています。prec は 2 です。
* は左結合なので next_min_prec には 3 がセットされます。
compute_expr(3) への再帰呼び出しは、
アトムをひとつ消費したあとで次の * トークンに注目します:

  8 * 9 * 10
        ^


Since the precedence of * is 2, while min_prec is 3, the while loop never runs and the call
returns. So the original compute_expr will get to handle the second multiplication, not the
internal call. Essentially, this means that the expression is grouped as follows:

min_prec が 3 なのに対して * の優先度は 2 なので、
while ループは一度も実行されずに呼び出し元に返ります。
結果として、内部の呼び出しではなく元の compute_expr が二番目の乗算を handle することになります。
本質的には、これは式が次のようにグループ化されることを意味します:

  (8 * 9) * 10

Which is exactly what we want from left associativity.


In contrast, for this expression:

  8 ^ 9 ^ 10

The precedence of ^ is 3, and since it's right associative, the min_prec for the recursive call
stays 3. This will mean that the recursive call will consume the next ^ operator before returning
to the original compute_expr, grouping the expression as follows:

^ の優先度は 3 であり、右結合なので再帰呼び出しに対する min_prec は 3 のままです。
これは、再帰呼び出しが元の compute_expr へ帰る前に次の ^ 演算子を消費し、
式が次のようにグループ化されることを意味します:

  8 ^ (9 ^ 10)

Handling sub-expressions
sub 式の handling

The algorithm pseudo-code presented above doesn't explain how parenthesized sub-expressions are
handled. Consider this expression:

上で示したアルゴリズムの擬似コードでは、カッコで囲まれた sub 式がどのように handle されるかは説明していません。
次の式を考えてみましょう:

  2000 * (4 - 3) / 100

It's not clear how the while loop can handle this. The answer is compute_atom. When it sees a left
paren, it knows that a sub-expression will follow, so it calls compute_expr on the sub expression
(which lasts until the matching right paren), and returns its result as the result of the atom. So
compute_expr is oblivious to the existence of sub-expressions.

while ループでこのカッコ対をどのように handle できるのかは明確ではありません。
その答えは compute_atom にあります。
compute_atom は左カッコを見つけたときにそのあとに sub 式が続くことがわかるので、
その sub 式に対して compute_expr を呼び出し
(sub 式は対応する右カッコまで続きます)、
その結果をアトムの結果として返します。
そのため、compute_expr は sub 式の存在に気がつかないのです。

Finally, in order to stay short the pseudo-code leaves some interesting details out. What follows
is a full implementation of the algorithm that fills all the gaps.

A Python implementation
Python による実装

Here is a Python implementation of expression parsing by precedence climbing. It's kept short for
simplicity, but can be easily expanded to cover a more real-world language of expressions. The
following sections present the code in small chunks. The whole code is available here.

以下は Python による precedence climbing を使った式解析の実装です。

I'll start with a small tokenizer class that breaks text into tokens and keeps a state. 
The grammar is very simple: numeric expressions, the basic arithmetic operators +, -, 
*, /, ^ and parens – (, ).

テキストをトークンに分解して状態を保持する小さな tokenizer クラスから始めましょう。
文法は非常にシンプルなもので、数値式、基本的な算術演算子 (+, -, *, /, ^)、カッコ ( と ) です。

  Tok = namedtuple('Tok', 'name value')

  class Tokenizer(object):
      """ Simple tokenizer object. The cur_token attribute holds the current
          token (Tok). Call get_next_token() to advance to the
          next token. cur_token is None before the first token is
          taken and after the source ends.
      """
      TOKPATTERN = re.compile("\s*(?:(\d+)|(.))")

      def __init__(self, source):
          self._tokgen = self._gen_tokens(source)
          self.cur_token = None

      def get_next_token(self):
          """ Advance to the next token, and return it.
          """
          try:
              self.cur_token = self._tokgen.next()
          except StopIteration:
              self.cur_token = None
          return self.cur_token

      def _gen_tokens(self, source):
          for number, operator in self.TOKPATTERN.findall(source):
              if number:
                  yield Tok('NUMBER', number)
              elif operator == '(':
                  yield Tok('LEFTPAREN', '(')
              elif operator == ')':
                  yield Tok('RIGHTPAREN', ')')
              else:
                  yield Tok('BINOP', operator)

Next, compute_atom:

  def compute_atom(tokenizer):
      tok = tokenizer.cur_token
      if tok is None:
          parse_error('source ended unexpectedly')
      elif tok.name == 'LEFTPAREN':
          tokenizer.get_next_token()
          val = compute_expr(tokenizer, 1)
          if tokenizer.cur_token.name != 'RIGHTPAREN':
              parse_error('unmatched "("')
          tokenizer.get_next_token()
          return val
      elif tok.name == 'BINOP':
          parse_error('expected an atom, not an operator "%s"' % tok.value)
      else:
          assert tok.name == 'NUMBER'
          tokenizer.get_next_token()
          return int(tok.value)

It handles true atoms (numbers in our case), as well as parenthesized sub-expressions.

Here is compute_expr itself, which is very close to the pseudo-code shown above:

  # For each operator, a (precedence, associativity) pair.
  OpInfo = namedtuple('OpInfo', 'prec assoc')

  OPINFO_MAP = {
      '+':    OpInfo(1, 'LEFT'),
      '-':    OpInfo(1, 'LEFT'),
      '*':    OpInfo(2, 'LEFT'),
      '/':    OpInfo(2, 'LEFT'),
      '^':    OpInfo(3, 'RIGHT'),
  }

  def compute_expr(tokenizer, min_prec):
      atom_lhs = compute_atom(tokenizer)

      while True:
          cur = tokenizer.cur_token
          if (cur is None or cur.name != 'BINOP'
                          or OPINFO_MAP[cur.value].prec < min_prec):
              break

          # Inside this loop the current token is a binary operator
          assert cur.name == 'BINOP'

          # Get the operator's precedence and associativity, and compute a
          # minimal precedence for the recursive call
          op = cur.value
          prec, assoc = OPINFO_MAP[op]
          next_min_prec = prec + 1 if assoc == 'LEFT' else prec

          # Consume the current token and prepare the next one for the
          # recursive call
          tokenizer.get_next_token()
          atom_rhs = compute_expr(tokenizer, next_min_prec)

          # Update lhs with the new value
          atom_lhs = compute_op(op, atom_lhs, atom_rhs)

      return atom_lhs

The only difference is that this code makes token handling more explicit. It basically follows
the usual "recursive-descent protocol". Each recursive call has the current token
available in tokenizer.cur_tok, and makes sure to consume all the tokens it has handled (by calling
tokenizer.get_next_token()).

唯一の違いはこのコードではより explicit にトークンを handle するということです。
基本的には通常の "recursive-descent protocol" に従っています。
再帰呼び出しのそれぞれは tokenizer.cur_tok で current token を参照でき、
(tokenizer.get_next_token() の呼び出しによって)
自分が handle したトークンをすべて確実に消費するようにします。

One additional small piece is missing. compute_op simply performs the arithmetic computation for the
supported binary operators:

  def compute_op(op, lhs, rhs):
      lhs = int(lhs); rhs = int(rhs)
      if op == '+':   return lhs + rhs
      elif op == '-': return lhs - rhs
      elif op == '*': return lhs * rhs
      elif op == '/': return lhs / rhs
      elif op == '^': return lhs ** rhs
      else:
          parse_error('unknown operator "%s"' % op)
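To sanity-check the worked example (2 + 3 ^ 2 * 3 + 4 → 33) end to end, here is a condensed, self-contained restatement of the same algorithm (my own paraphrase, not the article's code): the tokenizer collapses to a regex and the operator table and arithmetic are inlined, but the compute_atom / compute_expr split is identical. Integer division stands in for / as a simplification:

```python
import re
from collections import namedtuple

OpInfo = namedtuple('OpInfo', 'prec assoc')
OPS = {'+': OpInfo(1, 'LEFT'), '-': OpInfo(1, 'LEFT'),
       '*': OpInfo(2, 'LEFT'), '/': OpInfo(2, 'LEFT'),
       '^': OpInfo(3, 'RIGHT')}

def evaluate(source):
    tokens = re.findall(r'\d+|[+\-*/^()]', source)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1

    def compute_atom():
        tok = peek()
        if tok == '(':                 # parenthesized sub-expression
            advance()
            val = compute_expr(1)
            assert peek() == ')', 'unmatched "("'
            advance()
            return val
        advance()
        return int(tok)                # a plain number

    def compute_expr(min_prec):
        result = compute_atom()
        while peek() in OPS and OPS[peek()].prec >= min_prec:
            op = peek()
            advance()
            prec, assoc = OPS[op]
            # Left associativity raises the minimal precedence for the
            # recursive call; right associativity keeps it unchanged.
            rhs = compute_expr(prec + 1 if assoc == 'LEFT' else prec)
            if op == '+':   result = result + rhs
            elif op == '-': result = result - rhs
            elif op == '*': result = result * rhs
            elif op == '/': result = result // rhs
            elif op == '^': result = result ** rhs
        return result

    return compute_expr(1)

print(evaluate('2 + 3 ^ 2 * 3 + 4'))   # 33, matching the call tree above
print(evaluate('8 ^ 2 ^ 2'))           # 4096 = 8 ^ (2 ^ 2), right-associative
```

Running it reproduces the 33 from the call tree, and the ^ case confirms the right-associative grouping.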

In the real world – Clang

Precedence climbing is being used in real world tools. One example is Clang, the C/C++/ObjC
front-end. Clang's parser is hand-written recursive descent, and it uses precedence climbing for
efficient parsing of expressions. If you're interested to see the code, it's
Parser::ParseExpression in lib/Parse/ParseExpr.cpp [3]. This method plays the role of compute_expr.
The role of compute_atom is played by Parser::ParseCastExpression.

Other resources

Here are some resources I found useful while writing this article:

    The Wikipedia page for Operator-precedence parsing.
    The article by Keith Clarke (PDF), who apparently came up with the technique.
    This page by Theodore Norvell, about parsing expressions by recursive descent.
    The Clang source code (exact locations given in the previous section).

[1] 	There are a couple of simplifications made here on purpose. First, I assume only numeric expressions. Identifiers that represent variables can also be viewed as atoms. Second, I ignore unary operators. These are quite easy to incorporate into the algorithm by also treating them as atoms. I leave them out for succinctness.
[2] 	In this article I present a parser that computes the result of a numeric expression on-the-fly. Modifying it for accumulating the result into some kind of a parse tree is trivial.
[3] 	Clang's source code is constantly in flow. This information is correct at least for the date the article was written.

Related posts:

    Top-Down operator precedence parsing
    Some problems of recursive descent parsers
    A recursive descent parser with an infix expression evaluator
    Logical operators in Perl and Ruby
    Some thoughts on JSON vs. S-expressions



■_ Origins

Rob Pike - Google+ - A lesson in shortcuts. Long ago, as the design of the Unix… Rob Pike on the origin of dotfiles : programming



Half off. Though whether there's anything I actually want right now is another question: Save 50% on Perl Ebooks - Deals - O'Reilly Media


More of doing シェル操作課題 (cut, sort, uniq などで集計を行う) 設問編 - Yamashiro0217の日記 in PowerShell (including rewrites). Feels like it could be more concise, but I'm not sure how (laughs). Also, it would probably be better to define a cut-equivalent function up front; the same -split pattern keeps recurring.

Q1: Display this file

PS >Get-Content .\input.txt

Q2: Display only the server name and access destination from this file

PS > Get-Content .\input.txt|%{(($_ -split ",")[0,3]) -join ","}
Q3: Display only the server4 lines from this file

PS > Get-Content .\input.txt|where {$_ -match"^server4"}

Q4: Display the number of lines in this file
PS > Get-Content .\input.txt|Measure-Object|%{$_.count}

Q5: Display only 5 lines of this file, sorted ascending by server name and user ID

PS > Get-Content .\input.txt| Sort-Object -Property @{Expression={($_ -split ",")[0]}; Ascending=$true},@{Expression={[int]($_-split ",")[2]}; Ascending=$true}| Select-Object -first 5
Q6: This file contains duplicate lines. Count duplicates as one and display the line count

PS > Get-Content .\input.txt|group-object|Measure-Object|%{$_.count}

Q7: Display the number of unique users (UU) in this log

PS > Get-Content .\input.txt| %{($_ -split ",")[2]}|Group-Object|Measure-Object|%{$_.Count}

Q8: Count the accesses per access destination and display the top one

PS > Get-Content .\input.txt| %{($_ -split ",")[3]}|group-object|sort-object -property {$_.count} -Descending | select-object -first 1|%{$_.count.tostring() + "," + $_.name}
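Q8 is the textbook sort | uniq -c | sort -rn pipeline; the trailing awk only reproduces the "count,name" output format of the PowerShell one-liner. A sketch, with sample data reconstructed from the Group-Object listing later in this post:

```shell
# Sample data reconstructed from the Group-Object listing later in this post.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
server1,1343363124,30,/video.php
server2,1343363110,20,/profile.php
server2,1343363110,20,/profile.php
server3,1343363115,7,/login.php
server1,1343363105,8,/profile.php
server2,1343363205,35,/profile.php
server3,1343363205,30,/login.php
server4,1343363225,12,/video.php
server1,1343363265,7,/video.php
EOF

# Q8: count per access destination (field 4), take the most frequent,
# print as "count,name".
result=$(cut -d, -f4 "$tmp" | sort | uniq -c | sort -rn | head -n 1 \
         | awk '{print $1 "," $2}')
echo "$result"
```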

Q9: Replace the string "server" in this log with "xxx" and display the access count per server

PS > Get-Content .\input.txt| %{(($_ -split ",")[0]).replace("server", "xxx")}|group-object|%{$_.count.tostring() + " " + $_.name}
3 xxx1
3 xxx2
2 xxx3
1 xxx4
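The Unix take on Q9 swaps replace() for sed and Group-Object for uniq -c; with the sample data reconstructed from the Group-Object listing later in this post, it reproduces the counts shown above (modulo uniq -c's leading spaces, which awk strips here):

```shell
# Sample data reconstructed from the Group-Object listing later in this post.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
server1,1343363124,30,/video.php
server2,1343363110,20,/profile.php
server2,1343363110,20,/profile.php
server3,1343363115,7,/login.php
server1,1343363105,8,/profile.php
server2,1343363205,35,/profile.php
server3,1343363205,30,/login.php
server4,1343363225,12,/video.php
server1,1343363265,7,/video.php
EOF

# Q9: take the server field, rename server -> xxx, count per server.
result=$(cut -d, -f1 "$tmp" | sed 's/server/xxx/' | sort | uniq -c \
         | awk '{print $1, $2}')
printf '%s\n' "$result"
```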

Q10: Display the unique user IDs of users whose user ID is 10 or more, sorted by user ID (of the two attempts below, the first uses -gt, i.e. strictly greater than; -ge is the one that matches "10 or more")

PS > Get-Content .\input.txt| %{(($_ -split ",")[2])}|where {[int]$_ -gt 10}|sort-object {[int]$_} -unique
PS > Get-Content .\input.txt| %{(($_ -split ",")[2])}|where {[int]$_ -ge 10}|sort-object {[int]$_} -unique
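For Q10 the Unix tools get the numeric comparison and numeric sort almost for free: awk compares the field numerically, and sort -nu sorts and deduplicates in one go. A sketch, with sample data reconstructed from the Group-Object listing later in this post:

```shell
# Sample data reconstructed from the Group-Object listing later in this post.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
server1,1343363124,30,/video.php
server2,1343363110,20,/profile.php
server2,1343363110,20,/profile.php
server3,1343363115,7,/login.php
server1,1343363105,8,/profile.php
server2,1343363205,35,/profile.php
server3,1343363205,30,/login.php
server4,1343363225,12,/video.php
server1,1343363265,7,/video.php
EOF

# Q10: user IDs >= 10, unique, numerically sorted. awk compares $1
# numerically because the field looks like a number.
result=$(cut -d, -f3 "$tmp" | awk '$1 >= 10' | sort -nu)
printf '%s\n' "$result"
```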

Figuring out how to sort on multiple keys gave me some trouble; for instance, it would compare as strings when I wanted a numeric comparison.


Eli Bendersky's website » Parsing expressions by precedence climbing







■_ Tips on Emacs Lisp programming

All sorts of tips. Some I find convincing, some not.

Tips on Emacs Lisp programming

Tips on Emacs Lisp programming

I think EmacsLisp is getting to be a great application base, a really good language and environment
to write programs in, not just a fancy editor. A number of people seem to agree and are trying it out.

Here's some tips and tricks distilled from my 15 years of using EmacsLisp to help budding Lisp
hackers in Emacs.

Do use a modern Emacs

The latest version of Emacs is 24. It's not added to a whole lot of operating system repositories
yet but for Debian and Ubuntu you can get it easily by using this repository or PPA at

Using the latest version will give you access to lexical scope, built in packaging and lots more goodness.

I am working on a yum equivalent of that APT repository. Donating might make that quicker.


Copyright (C) 2009,2010,2011 Nic Ferrier


Xah Emacs Lisp Tutorial


■_ Statistics of Software Developers

A survey of developers, apparently. Software Developer Statistics (a day in your life) : programming Response summary - [ Statistics of Software Developers (a day in your life) ] - Google Docs


How many hours do you spend doing each activity - Coding (developing code, testing or debugging)

0 minutes		13	0%
5 to 30 minutes		27	1%
30 minutes to 1 hour		70	2%
1 to 2 hours		325	10%
2 to 6 hours or more		2653	84%

How many hours do you spend doing each activity - Talking to clients

0 minutes		1483	47%
5 to 30 minutes		822	26%
30 minutes to 1 hour		394	13%
1 to 2 hours		299	10%
2 to 6 hours or more		51	2%

How many hours do you spend doing each activity - Meetings (with team, boss, coworker, scrum, etc)

0 minutes		133	4%
5 to 30 minutes		1196	38%
30 minutes to 1 hour		988	31%
1 to 2 hours		641	20%
2 to 6 hours or more		143	5%

How many hours do you spend doing each activity - Lunch

0 minutes		77	2%
5 to 30 minutes		762	24%
30 minutes to 1 hour		1846	59%
1 to 2 hours		401	13%
2 to 6 hours or more		9	0%




シェル操作課題 (cut, sort, uniq などで集計を行う) 設問編 - Yamashiro0217の日記


C:\Users\kbk\My Dropbox\work\0801>powershell
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.

PS C:\Users\kbk\My Dropbox\work\0801> Get-Content .\input.txt
PS C:\Users\kbk\My Dropbox\work\0801> select-string -pattern "server4" input.txt


PS C:\Users\kbk\My Dropbox\work\0801> get-content input.txt |select-string -pattern "server4"


PS C:\Users\kbk\My Dropbox\work\0801> get-content input.txt |Measure-Object

Count    : 9
Average  :
Sum      :
Maximum  :
Minimum  :
Property :

PS C:\Users\kbk\My Dropbox\work\0801> get-content input.txt |Measure-Object -line

              Lines Words               Characters          Property
              ----- -----               ----------          --------

PS C:\Users\kbk\My Dropbox\work\0801> get-content input.txt |Group-Object

Count Name                      Group
----- ----                      -----
    1 server1,1343363124,30,... {server1,1343363124,30,/video.php}
    2 server2,1343363110,20,... {server2,1343363110,20,/profile.php, server2...
    1 server3,1343363115,7,/... {server3,1343363115,7,/login.php}
    1 server1,1343363105,8,/... {server1,1343363105,8,/profile.php}
    1 server2,1343363205,35,... {server2,1343363205,35,/profile.php}
    1 server3,1343363205,30,... {server3,1343363205,30,/login.php}
    1 server4,1343363225,12,... {server4,1343363225,12,/video.php}
    1 server1,1343363265,7,/... {server1,1343363265,7,/video.php}

PS C:\Users\kbk\My Dropbox\work\0801> get-content input.txt |Group-Object|Measure-Object

Count    : 8
Average  :
Sum      :
Maximum  :
Minimum  :
Property :

PS C:\Users\kbk\My Dropbox\work\0801> Get-Content .\input.txt | %{ $x=$_ -split","; $x[0,3] -join ","}

Getting output as simple as the Unix tools' takes some effort…

Previous: July 2012 (late)
Next: August 2012 (mid)



Email address: kbk AT kt DOT rim DOT or DOT jp