A very annoying person (or thing); a nuisance; a thorn in one's side

 It's a pain in the neck.
That's a really tiresome chore.

What a problem! or: What a pain in the neck!
by 쿠리다쿠리 2011. 7. 5. 17:59

To spoil the mood, to kill the fun, to throw cold water on the atmosphere

 
# Native Americans of the American continents are said to have covered their campfires with a water-soaked blanket when putting out the fire. From this custom, the expression "wet blanket" came to mean "to spoil the mood, to kill the fun, to throw cold water on the atmosphere."

[Source] wet blanket | by 아르벨라

 
He threw (put) a wet blanket on my birthday party. : He spoiled the fun at my birthday party.
He has a name for being a wet blanket. : He's famous for killing the mood, you know?
Don't be such a wet blanket. : Don't be a mood killer. Stop ruining the atmosphere.


 

by 쿠리다쿠리 2011. 7. 5. 17:56

Noun

1. self-confidence (自信)
2. overconfidence
  • Self-confidence is the most important key to success.
  • This allows kids to build up self-confidence.
by 쿠리다쿠리 2011. 7. 5. 17:48
A confident, imposing person; a tough person; someone not to be trifled with

She has become a tough cookie after being harassed by her fellow workers. (After being picked on by her co-workers, she toughened up.)


A: I heard Nancy and James broke up. I wonder how James is doing.
B: Don't worry about James. He is a tough cookie.
    But Nancy is probably broken into pieces. 

<Original: http://venticle.blog.me/40132765688>
 

by 쿠리다쿠리 2011. 7. 5. 17:44
에테시아바람 (Etesian wind)

연풍 breeze 軟風

탁월풍 prevailing wind 卓越風

일반풍 general wind 一般風

지균풍 geostrophic wind 地均風

진선풍 dust whirl, sand whirl 塵旋風

골바람 valley wind

파랑 wave 波浪

황사현상 yellow sand phenomenon 黃砂現象

편동풍 easterlies 偏東風

해륙풍 land and sea breeze 海陸風

허리케인 hurricane

국지풍 local wind 局地風

난기류 turbulent air 亂氣流

돌풍 gust 突風

반대계절풍 antimonsoon 反對季節風

반대무역풍 antitrade wind 反對貿易風

무역풍 trade wind 貿易風

산곡풍 mountain and valley winds 山谷風

상층바람 upper wind 上層-

선형풍 cyclostrophic wind 旋衡風

극동풍 polar easterlies 極東風

태풍 typhoon 颱風

토네이도 (tornado)
by 쿠리다쿠리 2011. 6. 30. 19:48
The Harmonic Mean is the reciprocal of the arithmetic mean of the reciprocals of the given numbers, and it is mainly used when computing an average rate of change.

Given real numbers a_1, ..., a_n, the harmonic mean H is given by

\frac{1}{H} = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{a_i}, or equivalently H = \frac{n}{\frac{1}{a_1} + \frac{1}{a_2} + \cdots + \frac{1}{a_n}}.

For example, when computing an average speed, we do not take a middle value of the individual speeds; we divide the total distance traveled by the total elapsed time. This is because speed and time affect each other in inverse proportion
(as speed increases, the time taken decreases).

Likewise, the Harmonic Mean is used when computing the resistance of resistors connected in parallel.
Here too, the more resistors are connected in parallel, the smaller the average resistance value becomes.
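
A minimal Python sketch of both uses (the numbers are made up for illustration):

def harmonic_mean(values):
    """Reciprocal of the arithmetic mean of the reciprocals."""
    n = len(values)
    return n / sum(1.0 / v for v in values)

# Average speed: 60 km out at 30 km/h, 60 km back at 60 km/h.
# Total distance / total time = 120 / (2 + 1) = 40 km/h,
# which is exactly the harmonic mean of the two speeds.
print(harmonic_mean([30, 60]))        # 40.0

# Parallel resistors: 1/R_eq = sum(1/R_i), so R_eq equals the
# harmonic mean of the resistances divided by their count.
resistors = [10.0, 20.0, 40.0]
r_eq = 1.0 / sum(1.0 / r for r in resistors)
print(r_eq)                           # ~5.714 ohms
print(harmonic_mean(resistors) / 3)   # same value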

by 쿠리다쿠리 2011. 6. 21. 19:25

CAMSHIFT Algorithm

The CAMSHIFT algorithm is based on the MEAN SHIFT algorithm. The MEAN SHIFT algorithm works well on static probability distributions but not on dynamic ones, as in a movie. CAMSHIFT is based on the principles of MEAN SHIFT but adds a facet to account for these dynamically changing distributions.

CAMSHIFT is able to handle dynamic distributions by readjusting the search window size for the next frame based on the zeroth moment of the current frame's distribution. This allows the algorithm to anticipate object movement and quickly track the object in the next scene. Even during quick movements of an object, CAMSHIFT is still able to track it correctly.


CAMSHIFT works by tracking the hue of an object, in this case, flesh color. The movie frames were all converted to HSV space before individual analysis.

CAMSHIFT was implemented as follows:
1. Initial location of the 2D search window was computed.
2. The color probability distribution is calculated for a region slightly bigger than the mean shift search window.
3. Mean shift is performed on the area until suitable convergence. The zeroth moment and centroid coordinates are computed and stored.
4. The search window for the next frame is centered around the centroid and the size is scaled by a function of the zeroth moment.
5. Go to step 2. 
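
For comparison, OpenCV ships a built-in implementation of this loop; a minimal sketch (the video filename and initial window below are placeholders, not values from the original project):

import cv2

cap = cv2.VideoCapture("hand.avi")
ok, frame = cap.read()
window = (100, 100, 80, 96)           # (x, y, w, h), found by inspection

# Step 1: a hue histogram of the initial region serves as the color model.
x, y, w, h = window
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Step 2: color probability via histogram back-projection.
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # Steps 3-5: mean shift to convergence, then re-center and rescale.
    rot_box, window = cv2.CamShift(prob, window, term)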

The initial search window was determined by inspection. Adobe Photoshop was used to determine its location and size. The initial window size was just big enough to fit most of the hand inside it. A window size too big may fool the tracker into tracking another flesh-colored object. A window too small will usually expand quickly to cover an object of constant hue; however, for quick motion, the tracker may lock onto another object or the background. For this reason, a hue threshold should be utilized to help ensure the object is properly tracked, and in the event that an object with a mean hue of the wrong color is being tracked, some operation can be performed to correct the error.

For each frame, its hue information was extracted. We noted that the hue of human flesh has a high angle value. This simplified our tracking algorithm, as the probability that a pixel belonged to the hand decreased as its hue angle did. Hue thresholding was also performed to help filter out the background and make the flesh color more prominent in the distributions.
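
One way such a hue-proportional probability image could be computed with OpenCV (the hue_min cutoff here is a placeholder, not the project's actual threshold):

import cv2
import numpy as np

def flesh_probability(frame_bgr, hue_min=150):
    # Probability grows with hue angle; low-hue background is zeroed out.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float32)
    prob = hue / 179.0                # OpenCV stores hue as 0..179
    prob[hue < hue_min] = 0.0         # hue thresholding
    return prob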

The zeroth moment, moment for x, and moment for y were all calculated. The centroid was then calculated from these values.

xc = M10 / M00; yc = M01 / M00

The search window was then shifted to center on the centroid and the mean shift was computed again. The convergence threshold used was T = 1. This ensured that we got a good track on each of the frames. A 5-pixel expansion of the search window in each direction was done to help track movement.
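
A from-scratch sketch of this step (NumPy; the T = 1 threshold and 5-pixel expansion follow the text, while the (x, y, w, h) window format and the rest are illustrative assumptions):

import numpy as np

def mean_shift(prob, window, T=1.0, expand=5):
    # Shift the window over the probability image until it moves
    # less than T pixels; returns the converged window, the zeroth
    # moment M00, and the centroid (xc, yc).
    x, y, w, h = window
    while True:
        x0, y0 = max(x - expand, 0), max(y - expand, 0)
        x1 = min(x + w + expand, prob.shape[1])
        y1 = min(y + h + expand, prob.shape[0])
        patch = prob[y0:y1, x0:x1]
        ys, xs = np.mgrid[y0:y1, x0:x1]
        m00 = patch.sum()
        if m00 == 0:
            return window, 0.0, (x + w / 2, y + h / 2)
        xc = (xs * patch).sum() / m00   # xc = M10 / M00
        yc = (ys * patch).sum() / m00   # yc = M01 / M00
        nx, ny = int(xc - w / 2), int(yc - h / 2)
        if abs(nx - x) < T and abs(ny - y) < T:
            return (nx, ny, w, h), m00, (xc, yc)
        x, y = nx, ny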

Once the converged values for the mean and centroid were computed, we computed the new window size. The window size was based on the area of the probability distribution. The scaling factor used was calculated by:

s = 1.1 * sqrt(M00)

The 1.1 factor was chosen after experimentation. A desirable factor is one that does not blow up the window size too quickly, or shrink it too quickly. Since the distribution is 2D, we use the sqrt of M00 to get the proper length in a 1D direction.

The new window size was computed with this scaling factor. It was noted that the width of the hand object was about 1.2 times its height, so the new window size was computed as:

W = [ (s)   (1.2*s) ]
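
Combining the two formulas in code (the 1.1 factor and the 1.2 width-to-height ratio come from the text; reading W = [s, 1.2*s] as (height, width) is our assumption):

import math

def resize_window(window, m00, center):
    # New window from the zeroth moment: s = 1.1 * sqrt(M00);
    # height s, width 1.2 * s, centered on the converged centroid.
    s = 1.1 * math.sqrt(m00)
    w, h = 1.2 * s, s
    xc, yc = center
    return (int(xc - w / 2), int(yc - h / 2), int(w), int(h))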

The window is centered around the centroid and the computation of the next frame is started.

Figure 2 - Probability distribution of skin. High intensity values represent high probability of skin. The search window and centroid are also superimposed on each frame. The frames are in sequence from top to bottom in each row. The frames displayed are 0, 19, 39, 59, 79.

D. Conclusion

Object tracking is a very useful tool. Objects can be tracked in many ways, including by color or by other features.

Tracking objects by difference frames is not always robust enough to work in every situation. There must be a static background and constant illumination to get great results. With this method, objects can be tracked only in situations with transient optical flow. If the pixel values don't change, no motion will be detected.

CAMSHIFT is a more robust way to track an object based on its color or hue. It is based on the MEAN SHIFT algorithm. CAMSHIFT improves upon MEAN SHIFT by accounting for dynamic probability distributions. It scales the search window size for the next frame by a function of the zeroth moment. In this way, CAMSHIFT is very robust for tracking objects.

There are many variables in CAMSHIFT. One must decide on suitable thresholds and search window scaling factors. One must also take into account uncertainties in hue when there is little intensity to a color. Knowing your distributions well helps you pick scaling values that track the correct object.

In any case, CAMSHIFT works well in tracking flesh-colored objects. These objects can be occluded or move quickly, and CAMSHIFT usually corrects itself.

E. Appendix 
 

Source Code

Movies

The hand tracking movies have the following format parameters:

Fps: 15.0000
Compression: 'Indeo3'
Quality: 75
KeyFramePerSec: 2.1429

Automatically updated parameters:
TotalFrames: 99
Width: 320
Height: 240
Length: 0
ImageType: 'Truecolor'
CurrentState: 'Closed'

F. Miscellaneous
CAM Shift Original Paper : Intel Technical Paper
http://isa.umh.es/pfc/rmvision/opencvdocs/papers/camshift.pdf


For Hough and other computer vision project sources/tutorials, see the link below:
http://www.gergltd.com/cse486/
 

by 쿠리다쿠리 2011. 6. 21. 11:37
In summary:
Mean Shift is a non-parametric clustering algorithm (or procedure); its biggest difference from K-means Clustering (Duda, Hart & Stork, 2001) is that it requires no prior information such as the shape of the distribution or the number of clusters.

Original text:
Mean shift represents a general non-parametric mode finding/clustering procedure. In contrast to the classic K-means clustering approach (Duda, Hart & Stork, 2001), there are no embedded assumptions on the shape of the distribution nor the number of modes/clusters.
(see http://www.cse.yorku.ca/~kosta/CompVis_Notes/mean_shift.pdf)
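
As a minimal illustration of this difference, a 1D Gaussian-kernel mean shift sketch in Python (the data and bandwidth are made up) that finds the modes without being told how many clusters there are:

import numpy as np

def mean_shift_modes(data, bandwidth=1.0, iters=50):
    # Move each point toward the kernel-weighted mean of the sample;
    # points settle on the modes of the estimated density.
    data = np.asarray(data, dtype=float)
    points = data.copy()
    for _ in range(iters):
        for i, p in enumerate(points):
            w = np.exp(-((data - p) ** 2) / (2 * bandwidth ** 2))
            points[i] = (w * data).sum() / w.sum()
    return np.unique(points.round(2))

print(mean_shift_modes([1.0, 1.2, 0.8, 5.0, 5.3, 4.9]))  # ~[1.0, 5.07]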

For a K-means Clustering tutorial, see the link below:
(http://www.autonlab.org/tutorials/kmeans11.pdf)

For other basic concepts, see Andrew Moore's Statistical Data Mining tutorials:
(http://www.autonlab.org/tutorials/index.html)


by 쿠리다쿠리 2011. 6. 20. 22:25

nomenclature [númənklèitʃəːr, nouménklə-] n.
(systematic) nomenclature (especially of a specialized field of study); [collective] terminology, technical terms; (taxonomic) scientific names.
Derived: nomenclatural [nòumənkléiʧərəl] ―a.
by 쿠리다쿠리 2010. 12. 9. 21:40

From Wikipedia, the free encyclopedia

Empirical risk minimization (ERM) is a principle in statistical learning theory which defines a family of learning algorithms and is used to give theoretical bounds on the performance of learning algorithms.

Background

Consider the following situation, which is a general setting of many supervised learning problems. We have two spaces of objects X and Y and would like to learn a function h: X \to Y (often called a hypothesis) which outputs an object y \in Y given x \in X. To do so, we have at our disposal a training set of a few examples (x_1, y_1), \ldots, (x_m, y_m), where x_i \in X is an input and y_i \in Y is the corresponding response that we wish to get from h(x_i).

To put it more formally, we assume that there is a joint probability distribution P(x,y) over X and Y, and that the training set consists of m instances (x_1, y_1), \ldots, (x_m, y_m) drawn i.i.d. from P(x,y). Note that the assumption of a joint probability distribution allows us to model uncertainty in predictions (e.g. from noise in data) because y is not a deterministic function of x, but rather a random variable with conditional distribution P(y | x) for a fixed x.

We also assume that we are given a non-negative real-valued loss function L(\hat{y}, y) which measures how different the prediction \hat{y} of a hypothesis is from the true outcome y. The risk associated with hypothesis h(x) is then defined as the expectation of the loss function:

R(h) = \mathbf{E}[L(h(x), y)] = \int L(h(x), y)\,dP(x, y).

A loss function commonly used in theory is the 0-1 loss function: L(\hat{y}, y) = I(\hat{y} \ne y), where I(\cdot) is the indicator function.

The ultimate goal of a learning algorithm is to find a hypothesis h^* among a fixed class of functions \mathcal{H} for which the risk R(h) is minimal:

h^* = \arg \min_{h \in \mathcal{H}} R(h).

Empirical risk minimization

In general, the risk R(h) cannot be computed because the distribution P(x,y) is unknown to the learning algorithm (this situation is referred to as agnostic learning). However, we can compute an approximation, called empirical risk, by averaging the loss function on the training set:

R_{\mathrm{emp}}(h) = \frac{1}{m} \sum_{i=1}^m L(h(x_i), y_i).

The empirical risk minimization principle states that the learning algorithm should choose a hypothesis \hat{h} which minimizes the empirical risk:

\hat{h} = \arg \min_{h \in \mathcal{H}} R_{\mathrm{emp}}(h).

Thus the learning algorithm defined by the ERM principle consists in solving the above optimization problem.
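
A toy sketch of the principle in Python with the 0-1 loss over a small finite hypothesis class (the threshold classifiers and the data are made up for illustration):

import numpy as np

# Training set drawn from some unknown P(x, y).
X = np.array([0.5, 1.5, 2.5, 3.5, 1.0, 3.0])
Y = np.array([0, 0, 1, 1, 0, 1])

# A finite hypothesis class H: threshold classifiers h_t(x) = 1[x >= t].
thresholds = [0.0, 1.0, 2.0, 3.0, 4.0]

def empirical_risk(t):
    # R_emp(h_t) = (1/m) * sum of 0-1 losses over the training set.
    predictions = (X >= t).astype(int)
    return np.mean(predictions != Y)

# ERM: choose the hypothesis that minimizes the empirical risk.
best_t = min(thresholds, key=empirical_risk)
print(best_t, empirical_risk(best_t))   # 2.0, empirical risk 0.0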



by 쿠리다쿠리 2010. 12. 9. 19:52