Simple channel indicator
Hi
I have tried to write a simple channel indicator. The indicator has 2 lines: the maximum of the last 20 highs and the minimum of the last 20 lows.
Here is my attempt.
class channel20(bt.Indicator):
    lines = ('maxi', 'mini',)
    params = (('period', 20),)

    def __init__(self):
        self.l.maxi = math.max(self.data.high(p.period))
        self.l.mini = math.min(self.data.low(p.period))
Putting it into a simple strategy to see if it works, I tried this:
from datetime import datetime
import backtrader as bt
class channel20(bt.Indicator):
    lines = ('maxi', 'mini',)
    params = (('period', 20),)

    def __init__(self):
        self.l.maxi = math.max(self.data.high(p.period))
        self.l.mini = math.min(self.data.low(p.period))

class test(bt.SignalStrategy):
    def __init__(self):
        channel = bt.indicator.channel20

cerebro = bt.Cerebro()
cerebro.addstrategy(test)

data0 = bt.feeds.YahooFinanceData(dataname='YHOO', fromdate=datetime(2011, 1, 1),
                                  todate=datetime(2012, 12, 31))
cerebro.adddata(data0)
cerebro.run()
cerebro.plot()
to get the following error.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-22-d93cd4afba9e> in <module>()
23 cerebro.adddata(data0)
24
---> 25 cerebro.run()
26 cerebro.plot()
/home/rory/anaconda2/envs/backtrader/lib/python2.7/site-packages/backtrader/cerebro.pyc in run(self, **kwargs)
1068 # let's skip process "spawning"
1069 for iterstrat in iterstrats:
-> 1070 runstrat = self.runstrategies(iterstrat)
1071 self.runstrats.append(runstrat)
1072 else:
/home/rory/anaconda2/envs/backtrader/lib/python2.7/site-packages/backtrader/cerebro.pyc in runstrategies(self, iterstrat, predata)
1144 data._start()
1145 if self._dopreload:
-> 1146 data.preload()
1147
1148 for stratcls, sargs, skwargs in iterstrat:
/home/rory/anaconda2/envs/backtrader/lib/python2.7/site-packages/backtrader/feed.pyc in preload(self)
687
688 # preloaded - no need to keep the object around - breaks multip in 3.x
--> 689 self.f.close()
690 self.f = None
691
AttributeError: 'NoneType' object has no attribute 'close'
I was hoping to simply get a printout of the indicator.
I wonder if anyone could shed some light on what I have done wrong.
Thanks
RM
Hi @rorymack
I ran your script and got the same error. Taking your indicator out of the strategy's `__init__` still returned the same error.
After a bit of tinkering I found a few issues:
The data feed is causing the first error you are seeing:
`AttributeError: 'NoneType' object has no attribute 'close'`
I replaced your data feed with my own and got past this.
You do not need to load your indicator with `bt.indicator.channel20` as it is in your script. You can initialize it with `self.channel = channel20()`.
You have not imported the `math` module. However, if you do, you will get the following error:
`module 'math' has no attribute 'max'`
If I understand you correctly, you want the indicator to display the highest high of the last 20 candles and the lowest low of the last 20 candles. This is not compatible with a signal strategy. For signal strategies, the lines need to alternate between 1 and -1 values for long and short signals.
You are missing a few `self` declarations.
Some other thoughts and a solution:
You probably want to add a `self.addminperiod(self.p.period)` line to your indicator's `__init__` so that it doesn't try to do anything until it has 20 candles' worth of data.
Here is a working version of what I think you are looking for. Replace the data with your own. (Or investigate why the Yahoo call isn't working. I don't use Yahoo data so have not looked into it)
from datetime import datetime
import backtrader as bt
class channel20(bt.Indicator):
    lines = ('maxi', 'mini',)
    params = (('period', 20),)

    def __init__(self):
        self.addminperiod(self.p.period)

    def next(self):
        highs = self.data.high.get(size=self.p.period)
        lows = self.data.low.get(size=self.p.period)
        self.lines.maxi[0] = max(highs)
        self.lines.mini[0] = min(lows)

class test(bt.Strategy):
    def __init__(self):
        self.channel = channel20()

cerebro = bt.Cerebro()
cerebro.addstrategy(test)

fromdate = datetime(2012, 1, 1)
todate = datetime(2012, 1, 5)
datapath = '../data/csv/commodities/XAUUSD/XAUUSD_m1_Ask_2012-2016.csv'

data0 = bt.feeds.GenericCSVData(
    timeframe=bt.TimeFrame.Minutes,
    compression=1,
    dataname=datapath,
    nullvalue=0.0,
    dtformat=('%m/%d/%Y'),
    tmformat=('%H:%M:%S'),
    fromdate=fromdate,
    todate=todate,
    datetime=0,
    time=1,
    high=3,
    low=4,
    open=2,
    close=5,
    volume=6,
    openinterest=-1  # -1 means not used
)

cerebro.adddata(data0)
cerebro.run()
cerebro.plot()
This resulted in the following
Hope this helps!
Thank you for your reply. I am blundering my way around here and appreciate you taking time to help me out.
Interesting that it was the data feed; I hacked this attempt from the SMACrossover example.
Looking at the DataFeeds reference https://www.backtrader.com/docu/dataautoref.html
It appears that it has all the lines needed.
Is it generally better to create csv's of the data ?
I notice quickstart tutorial does that.
I ran it again with your modifications and the yahoo data feed and it worked! Not sure if it is a version thing.
AhHa ! Thank you my python fu is not what it should be...
Ok, again poor Python skills on my side.
Not sure if I follow you here, I had in mind that I would set up the indicator and then set up the signal in
def next(self):
with this logic :
If previous period channel max is < current period high buy
and
If previous period channel low is > current period low sell
Which would generate the long and short signals.
I had it in my head that the indicator is separate from the signal, Is that conceptually correct?
Again thanks bad python on my part.
The solution
Yes I do !
Thank you for the solution, I will play around some more with it to get to grips with the platform.
As an aside, how do you get the indicator to plot as an overlay on the data? As opposed to in a separate plot.
Thanks again.
RM
Interesting that it was the data feed; I hacked this attempt from the SMACrossover example.
def __init__(self):
    self.l.maxi = math.max(self.data.high(p.period))
    self.l.mini = math.min(self.data.low(p.period))
Unfortunately `math.max` (actually `max`) doesn't generate a lazily evaluated *lines* object, and `self.data.high(period)` gives you a delayed, lazily evaluated *lines* object, not an array, which is what `max` expects.
The platform includes `Highest` (aka `MaxN`) and `Lowest` (aka `MinN`) indicators for that.
self.lines.maxi = bt.ind.MaxN(self.data, period=self.p.period)
This is also wrong:
channel = bt.indicator.channel20
because there is no instantiation. You are simply assigning a class (and not an instance of it) to `channel`.
As an aside, how do you get the indicator to plot as an overlay on the data? As opposed to in a separate plot.
If you want the indicator to appear on the price chart (I agree, this indicator is a good one to appear with price), then add `subplot=False` when initializing the indicator. This is a Backtrader keyword argument for indicators.
self.channel = channel20(subplot=False)
It will result in something that looks like this:
I ran it again with your modifications and the yahoo data feed and it worked! Not sure if it is a version thing.
It also could have been a Yahoo server issue... It was the weekend.
with this logic :
If previous period channel max is < current period high buy
and
If previous period channel low is > current period low sell
Which would generate the long and short signals.
Ok - if I follow this correctly, the script I gave you would need a tweak, because the current candle's high/low would never be above the min/max for the period, as the current candle is included in the calculation. You can see this in the above example image.
To do this you can just pop the most recent result out of the list, and then check whether the current candle is above or below the period max/min. Note that the period you are now comparing against is 19 candles instead of 20.
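A minimal, framework-free sketch of that tweak (plain Python with hypothetical numbers, not the backtrader implementation): drop the current candle from the window before taking the max, then compare the current candle against it.

```python
# Hypothetical 20-candle window of highs; the last element is the current candle.
highs = [10, 12, 11, 13, 12, 11, 10, 12, 13, 14,
         12, 11, 13, 12, 11, 10, 12, 13, 12, 15]

window = highs[:-1]          # the prior 19 candles (current candle excluded)
channel_max = max(window)    # highest high of the prior candles

current_high = highs[-1]
breakout = current_high > channel_max   # buy signal per the logic above
```

The same slicing applies symmetrically to the lows for the sell signal.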
Maxim Korobov:
Vote to add this indicator into the main package! It's great for the Forex market.
2020/01/16
regression using one input (x)
The approach we have used so far was based on data with a single input variable, as above.
regression using three inputs (x1, x2, x3)
How do we build the hypothesis when we have to handle several variables like this?
As shown above, we can simply multiply each x by its own W and add everything up. Unlike before, however, there is now more that has to be learned.
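For reference (the original figure is not reproduced in this copy), the multi-variable hypothesis described above can be written as:

```latex
H(x_1, x_2, x_3) = w_1 x_1 + w_2 x_2 + w_3 x_3 + b
```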
What about the cost function, then?
The framework of the cost function stays the same; what has changed is our hypothesis.
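Written out (the figure with this formula is missing from this copy), the cost keeps the same squared-error form, only with the new hypothesis plugged in:

```latex
\mathrm{cost}(W, b) = \frac{1}{m} \sum_{i=1}^{m} \left( H\left(x_1^{(i)}, x_2^{(i)}, x_3^{(i)}\right) - y^{(i)} \right)^2
```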
The same is true when there are far more variables than two or three.
We just keep adding variables in the same way, but the more variables there are, the longer the expression we have to write out term by term, which becomes inconvenient. To handle this, we introduce matrices; in fact, we will only use matrix multiplication.
With matrices, the long expression above can be written and computed compactly. If we treat the set of x values as a 1x3 matrix and the set of W values as a 3x1 matrix, their product gives exactly the expression we wanted, and the hypothesis can be written as a product of matrices, as shown below. (In matrix form, X is usually written first.)
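The 1x3 times 3x1 product can be checked in a couple of lines of NumPy (the numbers come from the score table; the weights here are placeholder values of 1, purely for illustration):

```python
import numpy as np

# One instance with three x values, as a 1x3 matrix.
X = np.array([[73., 80., 75.]])     # shape (1, 3)
W = np.array([[1.], [1.], [1.]])    # shape (3, 1); placeholder weights
H = X.dot(W)                        # (1, 3) x (3, 1) -> (1, 1)
```

With all weights equal to 1, H is just the sum 73 + 80 + 75 = 228.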
With this idea we can handle the score-prediction problem from the table above, which has three x variables. As the table shows, there is not just one data set made up of x1, x2, x3, but several rows. Each such row is called an instance, and when there are many instances, we could of course process them one at a time in a loop, but that is inefficient. The remarkable advantage of matrices here is that we can put all the instances of the x variables into a single matrix, in exactly the shape of the table.
In other words, when building the hypothesis, we only have to put all the x instances into one matrix, as below, and multiply it by a single W.
We will be doing this kind of matrix operation a lot. The number of x variables and the number of instances are already given, so matrix X can be considered given. Likewise, the product H has as many rows as there are instances and one y value per row, so it too is determined. In most cases, deciding the shape of the W matrix in this situation is part of designing the hypothesis.
If you know how matrix multiplication works, this is quite intuitive: since W sits on the right side of the product, its number of rows must equal the number of variables in X, and its number of columns equals the number of outputs y, which here is 1. Even when there is more than one output, we are assumed to know how many outputs we want, so the number of columns of W can be set accordingly.
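The shape rule can be sketched in NumPy (random values, only the shapes matter here):

```python
import numpy as np

n = 5                       # number of instances (could be anything)
X = np.random.rand(n, 3)    # n instances, 3 variables each
W = np.random.rand(3, 1)    # rows = number of x variables, cols = number of outputs
H = X @ W                   # (n, 3) x (3, 1) -> (n, 1)
```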
Also, in the example above we used 5 instances, but since the amount of data can grow, we write the number of instances as n. (NumPy expresses this as -1, TensorFlow as None.)
As in the lecture, we will practice the score-prediction problem with three x variables.
import tensorflow as tf
tf.set_random_seed(777)  # for reproducibility

x1_data = [73., 93., 89., 96., 73.]
x2_data = [80., 88., 91., 98., 66.]
x3_data = [75., 93., 90., 100., 70.]
y_data = [152., 185., 180., 196., 142.]

# placeholders for a tensor that will be always fed.
x1 = tf.placeholder(tf.float32)
x2 = tf.placeholder(tf.float32)
x3 = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

w1 = tf.Variable(tf.random_normal([1]), name='weight1')
w2 = tf.Variable(tf.random_normal([1]), name='weight2')
w3 = tf.Variable(tf.random_normal([1]), name='weight3')
b = tf.Variable(tf.random_normal([1]), name='bias')

hypothesis = x1 * w1 + x2 * w2 + x3 * w3 + b

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize. Need a very small learning rate for this data set
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

for step in range(2001):
    cost_val, hy_val, _ = sess.run([cost, hypothesis, train],
                                   feed_dict={x1: x1_data, x2: x2_data,
                                              x3: x3_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)
Based on the table from the lecture, the code can be written as above. It is not much different from before; the difference is that the parts defining x_data, the placeholders, and the weights were expanded as the number of variables grew.
The result is as follows.
0 Cost: 19614.8
Prediction:
[ 21.69748688 39.10213089 31.82624626 35.14236832 32.55316544]
10 Cost: 14.0682
Prediction:
[ 145.56100464 187.94958496 178.50236511 194.86721802 146.08096313]
...
1990 Cost: 4.9197
Prediction:
[ 148.15084839 186.88632202 179.6293335 195.81796265 144.46044922]
2000 Cost: 4.89449
Prediction:
[ 148.15931702 186.8805542 179.63194275 195.81971741 144.45298767]
Recalling what we learned in the lecture, you can tell that code like this is not pretty. What if there were 100 x variables instead of 3? It would become very messy, so this approach is neither recommended nor used. Instead, we use a matrix.
import tensorflow as tf
tf.set_random_seed(777)  # for reproducibility

x_data = [[73., 80., 75.],
          [93., 88., 93.],
          [89., 91., 90.],
          [96., 98., 100.],
          [73., 66., 70.]]
y_data = [[152.], [185.], [180.], [196.], [142.]]

# placeholders for a tensor that will be always fed.
X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.random_normal([3, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis
hypothesis = tf.matmul(X, W) + b  # matrix multiplication.

# Simplified cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

for step in range(2001):
    cost_val, hy_val, _ = sess.run(
        [cost, hypothesis, train], feed_dict={X: x_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)
With matrices, apart from the data section looking a bit more complex because it is written in matrix form, everything else is greatly simplified. A few points worth noting: when defining the placeholders for X and Y, although we defined 5 instances of the variable x above, the shape uses None so that any number n of instances can be expressed.
Also, as the line defining the hypothesis shows, matrix multiplication can be done with the TensorFlow function matmul (matrix multiplication).
The result of running it is the same as before.
As the data keeps growing, writing it all directly in the code becomes inconvenient, and with thousands or tens of thousands of rows it becomes outright impossible. So instead we define the data in a text file beforehand and load it; the most widely used format is the .csv extension. Before looking at the code, let's look at slicing, one of the powerful features of Python lists.
To be honest, this was also new to me when I watched this lecture, but it seems to work as follows. Given a list with the elements 0, 1, 2, 3, 4, writing nums[2:4] returns the elements from index 2 up to, but not including, index 4 as a list. (Personally, I'm not sure why it was designed this way.) Also, nums[:] returns the whole list, and nums[:-1] returns everything except the last element.
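The slicing rules above in runnable form:

```python
nums = [0, 1, 2, 3, 4]

a = nums[2:4]    # from index 2 up to, but not including, index 4
b = nums[:]      # the whole list (a shallow copy)
c = nums[:-1]    # everything except the last element
```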
NumPy is said to provide even more powerful slicing and indexing. It is similar to the built-in feature, so refer to the picture and look into it in more detail later.
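A tiny sketch of the NumPy slices used below on a stand-in for the csv data (3 feature columns plus 1 label column):

```python
import numpy as np

xy = np.array([[73., 80., 75., 152.],
               [93., 88., 93., 185.]])

x_data = xy[:, 0:-1]   # all rows, every column but the last
y_data = xy[:, [-1]]   # all rows, just the last column, kept 2-D
```

Note that `xy[:, [-1]]` (with the brackets) keeps the result a column matrix of shape (n, 1), which is what the Y placeholder expects.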
# EXAM1,EXAM2,EXAM3,FINAL
73,80,75,152
93,88,93,185
89,91,90,180
96,98,100,196
73,66,70,142
53,46,55,101
After defining data like the above as a file with the .csv extension, let's practice handling it with the code below.
import tensorflow as tf
import numpy as np
tf.set_random_seed(777)  # for reproducibility

xy = np.loadtxt('data-01-test-score.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]   # all n rows, every column except the last
y_data = xy[:, [-1]]   # all n rows, the last column as a list per row

# Make sure the shape and data are OK --> check the loaded data before training.
print(x_data, "\nx_data shape:", x_data.shape)
print(y_data, "\ny_data shape:", y_data.shape)

# placeholders for a tensor that will be always fed.
X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.random_normal([3, 1]), name='weight')  # W's shape follows from the shapes of X and Y.
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis
hypothesis = tf.matmul(X, W) + b

# Simplified cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

for step in range(2001):
    cost_val, hy_val, _ = sess.run([cost, hypothesis, train],
                                   feed_dict={X: x_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost:", cost_val, "\nPrediction:\n", hy_val)
After training with the code above, you can run the following to check whether the model has learned.
# Ask my score
print("Your score will be ",
      sess.run(hypothesis, feed_dict={X: [[100, 70, 101]]}))
print("Other scores will be ",
      sess.run(hypothesis, feed_dict={X: [[60, 70, 110], [90, 100, 80]]}))
Your score will be  [[ 181.73277283]]
Other scores will be  [[ 145.86265564]
 [ 187.23129272]]
So far we have practiced loading data from a file with NumPy and processing it. However, the files may be too large or too numerous to fit into memory at once. For such cases TensorFlow provides Queue Runners, a mechanism built into TensorFlow that takes care of this problem for us. Its operation is roughly as follows.
(File names are queued, records are read one by one by a reader, and each record is decoded, e.g. with decode_csv.)
After these steps, TensorFlow's batch function is used to group the records and hand them over in bundles. (The dictionary meaning of "batch" is a group processed as a unit; the professor compares batch to a pump.)
The full practice code that loads the training data using Queue Runners is below.
import tensorflow as tf
tf.set_random_seed(777)  # for reproducibility

filename_queue = tf.train.string_input_producer(
    ['data-01-test-score.csv'], shuffle=False, name='filename_queue')

reader = tf.TextLineReader()
key, value = reader.read(filename_queue)

# Default values, in case of empty columns. Also specifies the type of the
# decoded result.
record_defaults = [[0.], [0.], [0.], [0.]]
xy = tf.decode_csv(value, record_defaults=record_defaults)

# collect batches of csv in
train_x_batch, train_y_batch = \
    tf.train.batch([xy[0:-1], xy[-1:]], batch_size=10)

# placeholders for a tensor that will be always fed.
X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.random_normal([3, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis
hypothesis = tf.matmul(X, W) + b

# Simplified cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

# Start populating the filename queue.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

for step in range(2001):
    x_batch, y_batch = sess.run([train_x_batch, train_y_batch])
    cost_val, hy_val, _ = sess.run(
        [cost, hypothesis, train], feed_dict={X: x_batch, Y: y_batch})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)

coord.request_stop()
coord.join(threads)

# Ask my score
print("Your score will be ",
      sess.run(hypothesis, feed_dict={X: [[100, 70, 101]]}))
print("Other scores will be ",
      sess.run(hypothesis, feed_dict={X: [[60, 70, 110], [90, 100, 80]]}))
The result is the same as when the data was loaded with NumPy.
nutz version: 1.r.68.v20200427
The JSON string is as follows:
{
"access_token": "0f740c97-74fd-4f8a-a41a-bdb46d69d3d5",
"token_type": "bearer"
}
String content = "{\n" +
" \"access_token\": \"0f740c97-74fd-4f8a-a41a-bdb46d69d3d5\",\n" +
" \"token_type\": \"bearer\"\n" +
"}";
Map map = Json.fromJsonAsMap(NutMap.class, content);
System.out.println(map);
Converting the JSON string to a map throws an error:
Exception in thread "main" org.nutz.castor.FailToCastObjectException: Fail to cast from <java.lang.String> to <org.nutz.lang.util.NutMap> for {0f740c97-74fd-4f8a-a41a-bdb46d69d3d5}
at org.nutz.castor.Castors.cast(Castors.java:263)
at org.nutz.castor.Castors.castTo(Castors.java:317)
at org.nutz.mapl.impl.convert.ObjConvertImpl.injectMap(ObjConvertImpl.java:156)
at org.nutz.mapl.impl.convert.ObjConvertImpl.inject(ObjConvertImpl.java:87)
at org.nutz.mapl.impl.convert.ObjConvertImpl.convert(ObjConvertImpl.java:72)
at org.nutz.mapl.Mapl.maplistToObj(Mapl.java:34)
at org.nutz.json.Json.parse(Json.java:94)
at org.nutz.json.Json.fromJson(Json.java:88)
at org.nutz.json.Json.fromJson(Json.java:110)
at org.nutz.json.Json.fromJsonAsMap(Json.java:421)
at JsonTest.main(JsonTest.java:40)
January 14, 2020 — Posted by Kangyi Zhang, Sandeep Gupta, and Brijesh Krishnaswami
TensorFlow.js is an open-source library that lets you define, train, and run machine learning models in Javascript. The library has empowered a new set of developers from the extensive JavaScript community to build and deploy machine learning models and has enabled new use cases of machine learning. For example TensorFlow.js runs in a…
const model = await tf.node.loadSavedModel(path, [tag], signatureKey);
const output = model.predict(input);
You can also feed multiple inputs to the model as an array or a map:
const model1 = await tf.node.loadSavedModel(path1, [tag], signatureKey);
const outputArray = model1.predict([inputTensor1, inputTensor2]);
const model2 = await tf.node.loadSavedModel(path2, [tag], signatureKey);
const outputMap = model2.predict({input1: inputTensor1, input2:inputTensor2});
const modelInfo = await tf.node.getMetaGraphsFromSavedModel(path);
This new feature is available in the @tensorflow/tfjs-node package version 1.3.2 and newer, for both CPU and GPU. It supports TensorFlow SavedModel trained and exported in both TensorFlow Python versions 1.x and 2.0. Besides the benefit of not needing any conversion, native execution of TensorFlow SavedModel means that you can run models with ops that are not in TensorFlow.js yet, through loading the SavedModel as a TensorFlow session in the C++ bindings.
I have programmed for 2 months, and I began writing a Chess game. I am a beginner programmer in Python, so please assess my code.
class Chess_Board:
    def __init__(self):
        self.board = self.create_board()

    def create_board(self):
        board_x = []
        for x in range(8):
            board_y = []
            for y in range(8):
                board_y.append('.')
            board_x.append(board_y)
        board_x[7][4] = 'K'
        board_x[7][3] = 'Q'
        board_x[7][2] = 'B'
        board_x[7][1] = 'N'
        board_x[7][0] = 'R'
        return board_x
class WHITE_KING(Chess_Board):
    def __init__(self):
        Chess_Board.__init__(self)
        self.position_x_WK = 7
        self.position_y_WK = 4
        self.symbol_WK = 'K'

    def move(self):
        while True:
            try:
                print ('give a x and y coordinate for WHITE KING')
                destination_x_WK = int(input())
                destination_y_WK = int(input())
                if self.board[destination_x_WK][destination_y_WK] == '.' :
                    if ( abs(self.position_x_WK-destination_x_WK) <2 and abs(self.position_y_WK-destination_y_WK) < 2 ):
                        self.board[self.position_x_WK][self.position_y_WK] = '.'
                        self.position_x_WK = destination_x_WK
                        self.position_y_WK = destination_y_WK
                        self.board[self.position_x_WK][self.position_y_WK] = self.symbol_WK
                        return self.board
                        break
                    else:
                        print ('your move is invalid, please choose cooridnates again')
                        continue
            except:
                pass
class WHITE_QUEEN(Chess_Board):
    def __init__(self):
        Chess_Board.__init__(self)
        self.position_x_WQ = 7
        self.position_y_WQ = 3
        self.symbol_WQ = 'Q'

    def move(self):
        while True:
            try:
                print ('give a x and y coordinate for WHITE QUEEN')
                destination_x_WQ = int(input())
                destination_y_WQ = int(input())
                if self.board[destination_x_WQ][destination_y_WQ] == '.' :
                    if (destination_x_WQ == self.position_x_WQ or destination_y_WQ==self.position_y_WQ or abs(self.position_x_WQ-destination_x_WQ) == abs(self.position_y_WQ-destination_y_WQ) ):
                        self.board[self.position_x_WQ][self.position_y_WQ] = '.'
                        self.position_x_WQ = destination_x_WQ
                        self.position_y_WQ = destination_y_WQ
                        self.board[self.position_x_WQ][self.position_y_WQ] = self.symbol_WQ
                        return self.board
                        break
                    else:
                        print ('your move is invalid, please choose cooridnates again')
                        continue
            except:
                pass
class WHITE_ROOK(Chess_Board):
    def __init__(self):
        Chess_Board.__init__(self)
        self.position_x_WR = 7
        self.position_y_WR = 0
        self.symbol_WR = 'R'

    def move(self):
        while True:
            try:
                print ('give a x and y coordinate for WHITE ROOK ')
                destination_x_WR = int(input())
                destination_y_WR = int(input())
                if self.board[destination_x_WR][destination_y_WR] == '.' :
                    if (destination_x_WR == self.position_x_WR or destination_y_WR==self.position_y_WR ):
                        self.board[self.position_x_WR][self.position_y_WR] = '.'
                        self.position_x_WR = destination_x_WR
                        self.position_y_WR = destination_y_WR
                        self.board[self.position_x_WR][self.position_y_WR] = self.symbol_WR
                        return self.board
                        break
                    else:
                        print ('your move is invalid, please choose cooridnates again')
                        continue
            except:
                pass
class WHITE_BISHOP(Chess_Board):
    def __init__(self):
        Chess_Board.__init__(self)
        self.position_x_WB = 7
        self.position_y_WB = 2
        self.symbol_WB = 'B'

    def move(self):
        while True:
            try:
                print ('give a x and y coordinate for WHITE BISHOP')
                destination_x_WB = int(input())
                destination_y_WB = int(input())
                if self.board[destination_x_WB][destination_y_WB] == '.' :
                    if abs(self.position_x_WB-destination_x_WB) == abs(self.position_y_WB-destination_y_WB) :
                        self.board[self.position_x_WB][self.position_y_WB] = '.'
                        self.position_x_WB = destination_x_WB
                        self.position_y_WB = destination_y_WB
                        self.board[self.position_x_WB][self.position_y_WB] = self.symbol_WB
                        return self.board
                        break
                    else:
                        print ('your move is invalid, please choose cooridnates again')
                        continue
            except:
                pass
class WHITE_KNIGHT(Chess_Board):
    def __init__(self):
        Chess_Board.__init__(self)
        self.position_x_WKN = 7
        self.position_y_WKN = 1
        self.symbol_WKN = 'N'

    def move(self):
        while True:
            try:
                print ('give a x and y coordinate for WHITE KNIGHT')
                destination_x_WKN = int(input())
                destination_y_WKN = int(input())
                if self.board[destination_x_WKN][destination_y_WKN] == '.' :
                    if abs(self.position_x_WKN-destination_x_WKN)**2 + abs(self.position_y_WKN-destination_y_WKN)**2 == 5 :
                        self.board[self.position_x_WKN][self.position_y_WKN] = '.'
                        self.position_x_WKN = destination_x_WKN
                        self.position_y_WKN = destination_y_WKN
                        self.board[self.position_x_WKN][self.position_y_WKN] = self.symbol_WKN
                        return self.board
                        break
                    else:
                        print ('your move is invalid, please choose cooridnates again')
                        continue
            except:
                pass
class Engine(Chess_Board):
    def __init__(self):
        WHITE_KING.__init__(self)
        WHITE_QUEEN.__init__(self)
        WHITE_ROOK.__init__(self)
        WHITE_BISHOP.__init__(self)
        WHITE_KNIGHT.__init__(self)
        Chess_Board.__init__(self)

    def play(self):
        print('Please write what figure you choose to move: white_king, white_queen, white_rook, white_bishop'
              'or white knight')
        while True:
            choice = str(input())
            if choice == 'white_king':
                WHITE_KING.move(self)
                break
            elif choice == 'white_queen':
                WHITE_QUEEN.move(self)
                break
            elif choice == 'white_bishop':
                WHITE_BISHOP.move(self)
                break
            elif choice == 'white_knight':
                WHITE_KNIGHT.move(self)
                break
            elif choice == 'white_rook':
                WHITE_ROOK.move(self)
                break
            else:
                print ('please choose again')

    def display(self):
        for i in range (8):
            for j in range (8):
                print (self.board[i][j], end=' ')
            print ()

c_engine = Engine()
c_engine.display()
c_engine.play()
c_engine.display()
Radix sort, average time complexity O(n):
Radix sort relies on a stable sort: each field is sorted stably, from the lowest-priority field to the highest. For n items with d fields, where each field takes values in 0..k, the total time cost is Θ(d(n+k)).
A radix sort for decimal integers based on counting sort (from which you can see that counting sort is stable), implemented in Python:
import random
import math

A_range = 100
A = []
for i in range(A_range):
    A.append(random.randint(0, A_range))

# must make sure the count of columns (named d)
k = 10  # for decimal base

def radix_sort(A, d):
    A_sub = [0 for x in range(len(A))]
    for i in range(d):
        for j in range(len(A)):
            # // for integer division (Python 3); extract the i-th decimal digit
            A_sub[j] = A[j] // (k ** i) % k
        A = counting_sort(A, A_sub, k)
    return A

def counting_sort(A, A_sub, k):
    B = [0 for x in range(len(A))]
    C = [0 for x in range(k)]
    for i in range(len(A)):
        C[A_sub[i]] = C[A_sub[i]] + 1
    for j in range(1, k):
        C[j] = C[j] + C[j - 1]
    for l in range(len(A) - 1, -1, -1):
        B[C[A_sub[l]] - 1] = A[l]
        C[A_sub[l]] = C[A_sub[l]] - 1
    return B

A = radix_sort(A, int(math.log(A_range, k)) + 1)
print(A)
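As a quick sanity check, here is a compact, self-contained restatement of the same counting-sort-based radix sort (Python 3), verified against Python's built-in sorted:

```python
def counting_sort_by_key(A, keys, k):
    # Stable counting sort of A according to keys (each key in 0..k-1).
    B = [0] * len(A)
    C = [0] * k
    for key in keys:
        C[key] += 1
    for j in range(1, k):
        C[j] += C[j - 1]
    # Walk backwards so equal keys keep their original order (stability).
    for l in range(len(A) - 1, -1, -1):
        B[C[keys[l]] - 1] = A[l]
        C[keys[l]] -= 1
    return B

def radix_sort(A, d, k=10):
    # Sort by each decimal digit, least significant first.
    for i in range(d):
        keys = [(x // (k ** i)) % k for x in A]
        A = counting_sort_by_key(A, keys, k)
    return A

data = [329, 457, 657, 839, 436, 720, 355]
assert radix_sort(data, 3) == sorted(data)
```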
When using Python 2.7 with go-pylons.py to create a Pylons environment, the following error occurs.
[alswl@arch-vm xingtong]$ python go-pylons.py myb_env
New python executable in myb_env/bin/python
Traceback (most recent call last):
File "/home/alswl/work/xingtong/myb_env/lib/python2.7/site.py", line 67, in <module>
import os
File "/home/alswl/work/xingtong/myb_env/lib/python2.7/os.py", line 398, in <module>
import UserDict
File "/home/alswl/work/xingtong/myb_env/lib/python2.7/UserDict.py", line 83, in <module>
import _abcoll
File "/home/alswl/work/xingtong/myb_env/lib/python2.7/_abcoll.py", line 11, in <module>
from abc import ABCMeta, abstractmethod
File "/home/alswl/work/xingtong/myb_env/lib/python2.7/abc.py", line 8, in <module>
from _weakrefset import WeakSet
ImportError: No module named _weakrefset
ERROR: The executable myb_env/bin/python is not functioning
ERROR: It thinks sys.prefix is '/home/alswl/work/xingtong' (should be '/home/alswl/work/xingtong/myb_env')
ERROR: virtualenv is not compatible with this system or executable
#!diff
--- a/virtualenv.py 2010-09-14 21:48:58.078562930 +0200
+++ b/virtualenv.py 2010-09-14 21:46:20.650769346 +0200
@@ -51,6 +51,8 @@ REQUIRED_FILES = ['lib-dynload', 'config
 if sys.version_info[:2] >= (2, 6):
     REQUIRED_MODULES.extend(['warnings', 'linecache', '_abcoll', 'abc'])
+if sys.version_info[:2] >= (2, 7):
+    REQUIRED_MODULES.extend(['_weakrefset'])
 if sys.version_info[:2] <= (2, 3):
     REQUIRED_MODULES.extend(['sets', '__future__'])
 if is_pypy:
If you're lazy, you can simply download the patched go-pylons.py.
I wrote this blog post when working at Sqreen, a startup that develops Software-as-a-service (SaaS) solutions to protect web applications from cyber attacks. This post summarizes the streaming technology used to analyse the attacks in real time.
Introduction
At Sqreen we use the Amazon Kinesis service to process data from our agents in near real time. This kind of processing became popular recently with the appearance of general-purpose platforms that support it (such as Apache Kafka). Since these platforms deal with streams of data, such processing is commonly called "stream processing". It's a departure from the old model of analytics that ran the analysis in batches (hence the name "batch processing") rather than online. The main differences between these two approaches are:
stream processing deals with data that are punctual in time, i.e. with events that are generated at specific points in time, whereas batch processing is applied to data batches representing larger slices of time (for example, data stored in databases),
stream processing analyzes data online, i.e. most often almost immediately after it arrives, whereas batch processing waits for the data collection to be finished (the moment can be defined arbitrarily, for example, at the end of the day) to analyze it offline,
data analysed by stream processing is unbounded, i.e. it does not have the specific end, whereas the batches are bounded, i.e. they have a well-defined window.
Streams as distributed logs
Platforms such as Apache Kafka provide streams that receive data from event sources (producers) and pass them down to consumers, which in turn can forward them to other streams. In essence, they are similar to message queues, but they support multiple consumers that process the same messages in parallel (as in the publish-subscribe messaging model) and store the old messages even after they have been delivered to the consumers. They are a kind of append-only event log (Figure 1). Logs are most commonly associated with the flat files sitting in the /var/log directory and meant to be read by a human. Streams are different: they are logs optimized for storing/provisioning binary data (which could be text but also fragments of images, sensor readings, etc.). This log-like design of streams allows new consumers to be added or removed at any point without any impact on the remaining consumers. Consumers can also start reading from the stream at any offset (any message in the past).
Figure 1 A sketch of a stream. New events are appended at the left of the stream-log and are consumed by the consumers from right to left starting with any offset.
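The append-only-log idea can be sketched in a few lines of plain Python (a toy model, not the Kinesis API): events are only ever appended, and each consumer reads independently from whatever offset it chooses.

```python
class StreamLog:
    """Toy append-only log: events are appended at the end and never mutated."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)
        return len(self._events) - 1          # offset of the appended event

    def read_from(self, offset):
        return list(self._events[offset:])    # any consumer, any starting offset

log = StreamLog()
for event in ('req-1', 'req-2', 'req-3'):
    log.append(event)

# Two independent consumers reading the same log at different offsets.
consumer_a = log.read_from(0)
consumer_b = log.read_from(2)
```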
When events arrive at high frequency, a single machine may not keep up with processing them. In this case, both streams and their consumers can be distributed by partitioning the source events (Figure 2). Such partitioning is done on a key that is simply part of the logged messages.
Figure 2 Events emitted from the source (producer) are forwarded to the stream. In this case, the stream is distributed into two shards: an event is sent only to a single shard depending on the partition key that is part of the message (here the IP address). Messages from each shard are handled independently by different consumers.
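The key-to-shard mapping of Figure 2 boils down to a stable hash of the partition key (Kinesis itself uses an MD5-based mapping onto hash-key ranges; this is a simplified illustration with a made-up IP address):

```python
import hashlib

def shard_for(partition_key, num_shards):
    # Stable hash of the key -> shard index.
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# All events carrying the same IP land on the same shard, so a single
# consumer sees the full event history for that key.
shard = shard_for('203.0.113.7', 2)
```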
Streaming applications
Streams have found applications in many problems. They are commonly used for real-time data analytics (such as streams of tweets), for replicating databases (both for performance and reliability reasons), for real-time monitoring and detection of special events (such as fraud detection), and for building data-intensive systems that require different representations of the same data (for example, databases for operations, indexes for fast queries, and data warehouses for running batch analyses).
Amazon Kinesis Data Streams (which we will call simply Kinesis) is a managed service that provides a streaming platform. It includes solutions for stream storage and an API for implementing producers and consumers. Amazon charges per hour of operation of each stream partition (called a shard in Kinesis) and per volume of data flowing through the stream.
Goal
The goal of this tutorial is to familiarize you with stream processing using Amazon Kinesis. In particular, we will implement a simple producer-stream-consumer pipeline that counts the number of requests in consecutive, one-minute-long time windows. We will apply this pipeline to simulated data, but it could easily be extended to work with real websites. This is precisely one of the applications we use Kinesis for at Sqreen (more about it below).
We will demonstrate stream processing using a Jupyter notebook. You can download the notebook from here and execute it on your computer (for instructions, see the Jupyter documentation). Alternatively, you can copy-paste the code examples directly into your Python interpreter.
Requirements
To install dependencies, run the following commands at the command line (i.e. in the shell).
$ pip install boto
Configure AWS credentials
To connect to AWS, you must first create your credentials (you will get them from the AWS Console). Then, simply configure them using the following command:
$ aws configure --profile blogpost-kinesis
blogpost-kinesis is the name of the profile you will use for this tutorial. When requested, you will need to copy-paste the access key ID and secret obtained from the AWS Management Console. For instructions, check the relevant section of the AWS User Guide.
Creating a stream
Let’s create our first stream. You can do it either through the AWS Console or through the API; we will use the latter. First, we need to define the name of the stream, the region in which to create it, and the profile to use for our AWS credentials (you can set aws_profile to None if you use the default profile).
stream_name = 'blogpost-word-stream'
region = 'eu-west-1'
aws_profile = 'blogpost-kinesis'
Now we can use the boto library to create the stream and wait until it becomes active.
import boto
from boto.kinesis.exceptions import ResourceInUseException
import os
import time

if aws_profile:
    os.environ['AWS_PROFILE'] = aws_profile

# connect to Kinesis in the chosen region
kinesis = boto.kinesis.connect_to_region(region)

def get_status():
    r = kinesis.describe_stream(stream_name)
    description = r.get('StreamDescription')
    status = description.get('StreamStatus')
    return status

def create_stream(stream_name):
    try:
        # create the stream with a single shard
        kinesis.create_stream(stream_name, 1)
        print('stream {} created in region {}'.format(stream_name, region))
    except ResourceInUseException:
        print('stream {} already exists in region {}'.format(stream_name, region))
    # wait for the stream to become active
    while get_status() != 'ACTIVE':
        time.sleep(1)
    print('stream {} is active'.format(stream_name))
Running the code generates the following output:
create_stream(stream_name)
stream blogpost-word-stream created in region eu-west-1
stream blogpost-word-stream is active
Putting data into streams
To have an operational stream processing chain, we need a source of messages (a producer in AWS terminology) and a receiver (consumer) that will obtain and process the messages. We will first define the producer.
import datetime
import time
import threading

from boto.kinesis.exceptions import ResourceNotFoundException

class KinesisProducer(threading.Thread):
    """Producer class for AWS Kinesis streams.

    This class emits records with an IP address as the partition key
    and the emission timestamp as the data."""

    def __init__(self, stream_name, sleep_interval=None, ip_addr='8.8.8.8'):
        self.stream_name = stream_name
        self.sleep_interval = sleep_interval
        self.ip_addr = ip_addr
        super().__init__()

    def put_record(self):
        """put a single record to the stream"""
        timestamp = datetime.datetime.utcnow()
        part_key = self.ip_addr
        data = timestamp.isoformat()
        kinesis.put_record(self.stream_name, data, part_key)

    def run_continuously(self):
        """put a record at regular intervals"""
        while True:
            self.put_record()
            time.sleep(self.sleep_interval)

    def run(self):
        """run the producer"""
        try:
            if self.sleep_interval:
                self.run_continuously()
            else:
                self.put_record()
        except ResourceNotFoundException:
            print('stream {} not found. Exiting'.format(self.stream_name))
Note that we used the IP address as the partition key and the timestamp as the data. In Kinesis, you are almost completely free to choose whatever you want for the data, as long as it can be serialized to a binary format and fits within the record size limit (1 MB per record at the time of writing). If you need to emit larger data, you can split it into several messages. The partition key must be a string of at most 256 characters; it is used to determine which shard the data is sent to (Figure 2). All data that should be processed together must use the same partition key, otherwise it may be forwarded to a different shard.
Note that we implemented the KinesisProducer as a Python thread, such that it can run in the background and won’t block the Python interpreter. This way we can continue executing Python instructions.
Now we create two producers with different IP addresses and different intervals between consecutive messages.
producer1 = KinesisProducer(stream_name, sleep_interval=2, ip_addr='8.8.8.8')
producer2 = KinesisProducer(stream_name, sleep_interval=5, ip_addr='8.8.8.9')
producer1.start()
producer2.start()
Sqreen’s Security Automation feature allows you to monitor traffic on a website and set conditions under which a given client should be blocked (such as trying to read the same page too many times). To implement this feature, we run similar event sources that inform the stream about the IP addresses from which requests are made, together with the request timestamps (Figure 3).
Consuming from a stream
Consumers receive the messages from the stream and process them. Their output could be messages forwarded to another stream, files saved to the filesystem (or to Amazon S3 storage), or records stored in a database. Consumers can also keep local state. This makes them uniquely suited to working on a stream of similar data and quickly calculating a value from it.
Defining a consumer
First, let’s define a generic consumer consisting of a run method, which polls the Kinesis stream for new events, and a process_records method, which processes the event data and produces any side effects (such as forwarding the results to another stream or committing them to a database). process_records is not implemented in this generic base class; it must be implemented in subclasses (see below).
import datetime
import time

from boto.kinesis.exceptions import ProvisionedThroughputExceededException

class KinesisConsumer:
    """Generic consumer for Amazon Kinesis streams"""

    def __init__(self, stream_name, shard_id, iterator_type,
                 worker_time=30, sleep_interval=0.5):
        self.stream_name = stream_name
        self.shard_id = str(shard_id)
        self.iterator_type = iterator_type
        self.worker_time = worker_time
        self.sleep_interval = sleep_interval

    def process_records(self, records):
        """the main logic of the consumer that needs to be implemented"""
        raise NotImplementedError

    @staticmethod
    def iter_records(records):
        for record in records:
            part_key = record['PartitionKey']
            data = record['Data']
            yield part_key, data

    def run(self):
        """poll the stream for new records and pass them to process_records"""
        response = kinesis.get_shard_iterator(self.stream_name,
                                              self.shard_id, self.iterator_type)
        next_iterator = response['ShardIterator']

        start = datetime.datetime.now()
        finish = start + datetime.timedelta(seconds=self.worker_time)

        while finish > datetime.datetime.now():
            try:
                response = kinesis.get_records(next_iterator, limit=25)
                records = response['Records']
                if records:
                    self.process_records(records)
                next_iterator = response['NextShardIterator']
                time.sleep(self.sleep_interval)
            except ProvisionedThroughputExceededException:
                # back off when we poll faster than the shard allows
                time.sleep(1)
Implementing the processing logic
Note that each stream can have many consumers, each of which receives all the messages and processes them independently. Now we will implement a process_records method that simply prints the received messages to standard output. We will do that by subclassing KinesisConsumer.
class EchoConsumer(KinesisConsumer):
    """Consumer that echoes received data to standard output"""

    def process_records(self, records):
        """print the partition key and data of each incoming record"""
        for part_key, data in self.iter_records(records):
            print(part_key, ":", data)
We attach the consumer to our stream. To do that, we need to pass the shard ID and the position in the stream from which to start processing messages. For the latter, we can choose between the newest (LATEST) and the oldest (TRIM_HORIZON) record in the stream. Note that the default retention period for messages in Kinesis streams is 24 hours; it can be extended up to 168 hours at an additional cost.
Streams are partitioned into separate “sub-streams” (called shards) that receive messages from the same source. The target shard for each message is determined from the partition key. Each consumer can read from one or more shards, but at least one consumer must be associated with every shard, otherwise some messages will be lost. Since we only use one shard in this example, we can directly pass the default shard ID. If you need more than one shard (to increase throughput), you will have to query the stream for the IDs of all active shards using the API. For the sake of this tutorial, we will assume we have a single shard (this is clearly the case, since we created the stream with a single shard; see the call to kinesis.create_stream above).
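Querying the shard IDs reuses the same describe_stream response we already parsed in get_status: the StreamDescription contains a Shards list, and each entry has a ShardId field. The helper below only parses that dict, so it is shown here against a hand-written example payload (field names follow the Kinesis DescribeStream API; the values are made up):

```python
def list_shard_ids(describe_response):
    """Extract the IDs of all shards from a DescribeStream response."""
    description = describe_response['StreamDescription']
    return [shard['ShardId'] for shard in description['Shards']]

# Example payload in the shape returned by kinesis.describe_stream(stream_name):
response = {
    'StreamDescription': {
        'StreamName': 'blogpost-word-stream',
        'StreamStatus': 'ACTIVE',
        'Shards': [
            {'ShardId': 'shardId-000000000000'},
            {'ShardId': 'shardId-000000000001'},
        ],
    }
}
print(list_shard_ids(response))
# → ['shardId-000000000000', 'shardId-000000000001']
```

With more than one shard, you would typically start one consumer per returned shard ID.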
shard_id = 'shardId-000000000000'
iterator_type = 'LATEST'
worker = EchoConsumer(stream_name, shard_id, iterator_type, worker_time=10)
Now, let's run the consumer and observe the output:
worker.run()
8.8.8.8 : 2018-09-06T08:04:27.796125
8.8.8.8 : 2018-09-06T08:04:29.877330
8.8.8.9 : 2018-09-06T08:04:30.895562
8.8.8.8 : 2018-09-06T08:04:31.963790
8.8.8.8 : 2018-09-06T08:04:34.015333
As expected, the consumer printed all received records with their partition keys (IP addresses) and data (timestamps).
Event aggregation
Finally, we can implement a consumer with some non-trivial logic. The goal of this consumer is to count the number of distinct requests from each IP in a given time window (here one minute). Again, we will subclass KinesisConsumer and re-implement the process_records method. In addition, we will define one extra helper method, print_counters, that simply dumps the current counts to standard output. In practice, we would forward the outputs of such processing to another stream for further analysis (filtering, detection of atypical events, etc.) or store them in a database. This is part of what actually happens in Sqreen’s Security Automation pipeline (see below).
import datetime

from collections import defaultdict, Counter
from dateutil import parser
from operator import itemgetter

class CounterConsumer(KinesisConsumer):
    """Consumer that counts IP occurrences in 1-minute time buckets"""

    def __init__(self, stream_name, shard_id, iterator_type, worker_time):
        self.time_buckets = defaultdict(Counter)
        sleep_interval = 20  # seconds
        super().__init__(stream_name, shard_id, iterator_type, worker_time, sleep_interval)

    def print_counters(self):
        """helper method to show counting results"""
        now = datetime.datetime.utcnow()
        print("##### Last run at {}".format(now))
        for timestamp, ip_counts in self.time_buckets.items():
            # sort counts with respect to the IP address
            ip_counts = sorted(ip_counts.items(), key=itemgetter(0))
            print(timestamp, ':', list(ip_counts))

    def process_records(self, records):
        for ip_addr, timestamp_str in self.iter_records(records):
            # truncate the timestamp to the start of its 1-minute bucket
            timestamp = parser.parse(timestamp_str)
            timestamp = timestamp.replace(second=0, microsecond=0)
            self.time_buckets[timestamp][ip_addr] += 1
        self.print_counters()
Let’s test the consumer:
worker = CounterConsumer(stream_name, shard_id, iterator_type, worker_time=120)
worker.run()
##### Last run at 2018-09-06 08:04:56.468067
2018-09-06 08:04:00 : [('8.8.8.8', 9), ('8.8.8.9', 4)]
##### Last run at 2018-09-06 08:05:16.563615
2018-09-06 08:04:00 : [('8.8.8.8', 11), ('8.8.8.9', 4)]
2018-09-06 08:05:00 : [('8.8.8.8', 8), ('8.8.8.9', 3)]
##### Last run at 2018-09-06 08:05:36.670241
2018-09-06 08:04:00 : [('8.8.8.8', 11), ('8.8.8.9', 4)]
2018-09-06 08:05:00 : [('8.8.8.8', 17), ('8.8.8.9', 7)]
##### Last run at 2018-09-06 08:05:56.775192
2018-09-06 08:04:00 : [('8.8.8.8', 11), ('8.8.8.9', 4)]
2018-09-06 08:05:00 : [('8.8.8.8', 27), ('8.8.8.9', 11)]
##### Last run at 2018-09-06 08:06:16.881760
2018-09-06 08:04:00 : [('8.8.8.8', 11), ('8.8.8.9', 4)]
2018-09-06 08:05:00 : [('8.8.8.8', 29), ('8.8.8.9', 12)]
2018-09-06 08:06:00 : [('8.8.8.8', 8), ('8.8.8.9', 3)]
All the lines prefixed by the hash signs ##### show the results of the counting process for a single run of the consumer. Since the consumer is executed each time new events arrive, the lines show the updated state of the time_buckets cache. Each line starts with the timestamp denoting the beginning of the time bucket (the bucket ends at the beginning of the next one, i.e. the windows do not overlap), and it is followed by a list of IP address and count pairs. Every time the consumer runs, the values are updated, so the counts increase. If new requests arrive at a time not covered by any of the buckets, a new bucket is added and counting starts from zero for that bucket. The effect is roughly what we set out to achieve.
How is streaming used at Sqreen?
At Sqreen we use Kinesis streams intensively, especially in the feature called Security Automation. Security Automation is a real-time analytics framework that allows users to control traffic on their servers based on well-defined criteria (called playbooks).
Figure 3 A simplified sketch of Sqreen’s streaming pipeline.
Our pipeline consists of several streams and associated consumers (Figure 3). The events are produced by agents that sit in the web apps of Sqreen users. They contain the basic information about the connection (source IP, etc.) and any extra details relevant to the business logic of the user’s application. These events are consumed by a consumer that filters them and forwards them to the Log stream. The Detection consumer consumes from the Log stream, applies playbooks, detects anomalies (for example, too many requests from a single IP), and generates a response (for example, notifying the owner of the web app or blocking the IP). In parallel, the messages from the Log stream are consumed by the Counter consumer, which performs an aggregation similar to the one demonstrated in this tutorial. The aggregated data are then stored in a database and exposed in the form of a graph. This approach, in which data is processed in parallel in different ways to obtain different views, is typical of stream processing. Note that the Detection and Counter consumers read from the Log stream at different offsets and do not interfere with each other (for example, if one consumer crashes or has a significant backlog, the other is not affected). At Sqreen, this design allows us to attach multiple actions to the messages coming from user web apps (IP blocking, notifications, logging, monitoring, etc.).
Conclusions
We demonstrated how to use Amazon Kinesis on a request-counting example. Although the example was simplified, it contained the basic components of any stream processor: two producers, a stream (with a single shard), and one consumer. You can easily take this example and adapt it to your needs.
One important limitation of the present CounterConsumer is that it keeps all counting windows in memory and prints them on every run of the consumer. In a real application, we would rather persist only the completed windows to the database and remove them from the time_buckets cache. This is not a trivial problem, because we can never be sure whether some events will arrive late, for example due to a network delay or a temporary network outage.
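One hedged way to bound the cache is sketched below, under the explicit assumption that no event arrives more than grace_period behind real time: a bucket is flushed once the current time has moved past its end plus the grace period. The function and its parameters are our own illustration, not part of the tutorial's pipeline:

```python
import datetime

def flush_completed(time_buckets, now,
                    grace_period=datetime.timedelta(minutes=1)):
    """Pop and return the buckets whose window ended more than
    grace_period ago; everything else stays in the cache.

    Assumes no event arrives later than grace_period after its
    timestamp; a genuinely later event would silently recreate
    (and undercount) an already-flushed bucket."""
    window = datetime.timedelta(minutes=1)
    completed = [start for start in time_buckets
                 if start + window + grace_period <= now]
    return {start: time_buckets.pop(start) for start in sorted(completed)}

buckets = {
    datetime.datetime(2018, 9, 6, 8, 4): {'8.8.8.8': 11},
    datetime.datetime(2018, 9, 6, 8, 6): {'8.8.8.8': 8},
}
now = datetime.datetime(2018, 9, 6, 8, 6, 30)
done = flush_completed(buckets, now)
# the 08:04 window ended at 08:05; with a one-minute grace period it is
# safe to flush it at 08:06, while the 08:06 bucket stays open
print([t.isoformat() for t in done])     # → ['2018-09-06T08:04:00']
print([t.isoformat() for t in buckets])  # → ['2018-09-06T08:06:00']
```

In the CounterConsumer, such a flush could run at the end of process_records, writing the returned buckets to the database.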
Another extension of CounterConsumer would be to allow the windows to overlap. The overlap would smooth the counts and make our pipeline more responsive, because the end user would not have to wait for a full window to complete before seeing a new event reflected in the counts.
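Overlapping windows can be had by assigning each event to every window that covers it. The sketch below is our own illustration (not part of the pipeline above): one-minute windows that start every 30 seconds, so each event falls into exactly two buckets:

```python
import datetime

def bucket_starts(timestamp, window=datetime.timedelta(minutes=1),
                  slide=datetime.timedelta(seconds=30)):
    """Return the start times of all sliding windows covering timestamp."""
    # the latest candidate window starts at the slide boundary at or
    # before the event; walk backwards while the window still covers it
    midnight = timestamp.replace(hour=0, minute=0, second=0, microsecond=0)
    offset = (timestamp - midnight) // slide * slide
    start, starts = midnight + offset, []
    while start + window > timestamp >= start:
        starts.append(start)
        start -= slide
    return sorted(starts)

ts = datetime.datetime(2018, 9, 6, 8, 4, 40)
print([s.time().isoformat() for s in bucket_starts(ts)])
# → ['08:04:00', '08:04:30']
```

In process_records, incrementing the counter of every returned bucket (instead of the single truncated one) would give the smoothed counts.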
Last but not least, we did not cover the important topic of spawning new consumers when an existing consumer fails or when we want to increase the number of shards. Similarly, we did not discuss checkpointing, which allows a consumer to recover its state after a crash. These are non-trivial problems, but they can be handled by the Amazon Kinesis Client Library (KCL), which is based on a Java orchestrator called MultiLangDaemon. We will look into running stream consumers with KCL in a follow-up blog post.
Cleaning up
We can delete the stream at the end of the exercise to minimize AWS costs (you are charged for each stream-hour whether you use the created stream or not).
kinesis.delete_stream(stream_name)
stream blogpost-word-stream not found. Exiting
stream blogpost-word-stream not found. Exiting
The two messages are printed by the producers, which no longer find the stream and have to exit.
Further reading
[1] Jay Kreps, The Log: What every software engineer should know about real-time data’s unifying abstraction, 2013, blog post
[2] Martin Kleppmann, Designing data-intensive applications, O’Reilly media, 2017
[3] Martin Kleppmann, Making Sense of Stream Processing, O’Reilly media, 2016, read online |
The difficult part was figuring out the right config syntax; the only one that worked is below:
auth-user-pass-verify "C:/Python27/python.exe user-auth.py" via-env
The most surprising thing was:
OpenVPN cannot run a Python (or VBS) script without crutches!
user-auth.py
#!/usr/bin/python
import os
import sys
import socket

import pyrad.packet
from pyrad.client import Client, Timeout
from pyrad.dictionary import Dictionary

# OpenVPN exports the credentials as environment variables (via-env)
srv = Client(server="server_ip", secret="some_s3cret",
             dict=Dictionary("dictionary"))
req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                           User_Name=os.environ.get('username'))
req["User-Password"] = req.PwCrypt(os.environ.get('password'))

try:
    reply = srv.SendPacket(req)
except Timeout:
    print("RADIUS server does not reply")
    sys.exit(1)
except socket.error as error:
    print("Network error: {}".format(error))
    sys.exit(1)

if reply.code == pyrad.packet.AccessAccept:
    print("access accepted")
    sys.exit(0)
else:
    print("access denied")
    sys.exit(1)
The PlayStation 2 cannot be emulated in the browser yet, but that is only a matter of time: people are already booting Linux there, and it works. It looks like a miracle, but there is no miracle: inside, such emulators are very simple.
Introduction
Emulation in JavaScript became possible for two reasons. First, the Canvas tag. Thank Apple for it: ten years ago the company designed and built into WebKit a technology that lets JavaScript manipulate individual pixels on an HTML page. Today Canvas is supported by every mainstream browser.
Second, time. Most platforms suitable for emulation in the browser appeared in the 1980s. Modern computers are many orders of magnitude faster than the devices in use a quarter of a century ago. The relatively modest performance of JavaScript is more than enough to imitate that vintage hardware.
The exception is JSLinux, the famous PC emulator. It was written in 2011 by the well-known French programmer Fabrice Bellard, the creator of FFmpeg, the popular video encoding and decoding toolkit used by VLC, MPlayer, and YouTube, and of QEMU, the universal hardware emulator. A PC running Linux is hardly retro, right?
Almost. It is no longer the eighties, but it is not the present day either. Bellard's emulator models a 32-bit processor resembling the Intel 80486, connected to an interrupt controller, a timer, a memory management unit, an IDE interface for talking to a virtual hard disk, and a serial port. The author threw out of the processor everything he could do without, including floating-point support, and used a text terminal instead of a graphical display, which is simpler.
Those who want to understand how JSLinux works should look at the annotated version of the project's sources published on GitHub. Unlike the original, it comes with detailed comments explaining which piece of code corresponds to which bit of functionality of the x86-compatible processor or of the other parts of the PC.
As in any other emulator, the central role in JSLinux is played by the object that models the processor. Here the corresponding class is called CPU_X86; it holds variables storing the values of all registers and flags, as well as references to the virtual computer's "RAM" and methods for working with it. In principle, the RAM could be represented as an ordinary integer array (many other emulators are built that way), but Bellard found a more efficient option: typed arrays, which were added to JavaScript relatively recently for handling binary data in WebGL. Besides the processor, the emulator contains separate objects that imitate the programmable interrupt controller, the serial port, and the timers.
Interaction with the outside world happens through a virtual serial port: key presses come in, and sequences of characters meant for the text console go out. In effect, the terminal acts as a gateway between two programming interfaces: on one side the browser with its DOM events, on the other Linux, which receives data through the serial port.
On startup, Bellard's emulator creates a PCEmulator object containing the processor and the other components of the computer, and allocates 32 MB of memory. It then initializes the I/O device objects and loads into memory the images of the Linux kernel and the contents of the virtual "file system" (they live in the files vmlinux26.bin, root.bin, and linuxstart.bin). After that, the EIP register (the instruction pointer) is set to the address where the contents of vmlinux26.bin ended up, EAX receives the size of the virtual RAM in bytes, and EBX the size of root.bin. The emulator is ready to run.
The processor's execution cycle is described in the timer_func method of the PCEmulator class. Details aside, it consists of repeatedly calling the exec_internal method of the CPU_X86 class, which fetches and executes individual machine-code instructions. The beginning of the method is marked with a laconic comment: The Beast. And for good reason. Not counting comments, exec_internal is about six thousand lines long, roughly 85% of the whole emulator. The method identifies instructions, extracts their arguments, and updates the processor state accordingly.
Digging through the code of Bellard's emulator is hard not so much because of its complexity as because of the size of the x86 instruction set. Merely listing the registers would take a whole page. But most other emulators written in JavaScript use the same principle. Take, for example, JSNES, an emulator of the eight-bit NES game console made by Nintendo (in Russia the console is known as "Dendy").
NES and Sega
The NES is built around the eight-bit Ricoh 2A03 processor, which uses the instruction set of the MOS 6502, a popular chip designed in the mid-seventies. It is very simple not only compared to the Intel 80486 but also compared to its contemporary, the eight-bit Intel 8080. The MOS 6502 has just two general-purpose registers (X and Y), an accumulator for arithmetic, and three special registers: the P register, whose individual bits serve as processor flags, the stack pointer SP, and the program counter PC. All of them except the sixteen-bit PC are eight bits long (the high byte of the stack pointer is always assumed to be 0x01).
Emulating such a processor is much easier than emulating an Intel 80486. As in JSLinux, JSNES has an object describing the processor state:
JSNES.CPU = function() {
this.mem = null;
this.REG_ACC = null;
this.REG_X = null;
this.REG_Y = null;
this.REG_SP = null;
this.REG_PC = null;
// Skipping the long list of processor flags
this.reset();
};
The reset method allocates the memory: an integer array of 65,536 elements, which is exactly the maximum number of elements addressable with sixteen-bit addresses:
this.mem = new Array(0x10000);
Next, the emulator initializes the processor registers. Although the real MOS 6502 stack pointer holds no more than one byte, in the emulator it stores a full address. It is easier to handle overflow of the SP register correctly than to assemble the full address from pieces every time the stack is accessed:
this.REG_ACC = 0;
this.REG_X = 0;
this.REG_Y = 0;
this.REG_SP = 0x01FF;
this.REG_PC = 0x8000-1;
JSNES, of course, has its own analogue of Bellard's exec_internal, a method that executes individual machine-code instructions. It is called emulate, and while not small, it is not nearly as monstrous. After handling interrupts, emulate fetches the contents of the memory cell the program counter points to, advances the program counter by one byte, and prepares to count the cycles spent:
var opinf = this.opdata[this.nes.mmap.load(this.REG_PC+1)];
var cycleCount = (opinf>>24);
var cycleAdd = 0;
var opaddr = this.REG_PC;
this.REG_PC += ((opinf >> 16) & 0xFF);
var addrMode = (opinf >> 8) & 0xFF;
var addr = 0;
Now it must determine whether the instruction has an argument and compute it. Depending on the value of the addrMode variable, the argument may be located at the address the program counter points to, be contained in the accumulator, or be computed in nearly a dozen different ways. Finally, it may simply be absent. The argument's value is stored in the addr variable.
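The idea of dispatching on an addressing mode is easy to show outside of JavaScript. Here is a deliberately tiny Python sketch (our own toy, not JSNES code) of three of the 6502's modes: immediate (the operand is the byte itself), zero page (the byte is an address within the first 256 bytes of memory), and absolute (two bytes form a 16-bit little-endian address):

```python
IMMEDIATE, ZERO_PAGE, ABSOLUTE = range(3)

def resolve_operand(mem, pc, addr_mode):
    """Return the operand value for the instruction whose opcode is at pc.

    A toy model of the 6502 addressing modes discussed above."""
    if addr_mode == IMMEDIATE:
        return mem[pc + 1]                       # operand is the byte itself
    if addr_mode == ZERO_PAGE:
        return mem[mem[pc + 1]]                  # byte is an address in page 0
    if addr_mode == ABSOLUTE:
        addr = mem[pc + 1] | (mem[pc + 2] << 8)  # 16-bit little-endian address
        return mem[addr]
    raise ValueError('unsupported addressing mode')

mem = [0] * 0x10000
mem[0x8000:0x8003] = [0xAD, 0x34, 0x12]  # e.g. LDA $1234 (absolute)
mem[0x1234] = 99
print(resolve_operand(mem, 0x8000, ABSOLUTE))  # → 99
```

The real emulator does the same thing, only with about a dozen modes and cycle-count bookkeeping on top.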
With that done, it is time to execute the instruction. This is handled by the switch(opinf&0xFF) statement, followed by several dozen possible opcode values. Here, for example, is the JMP instruction. It has opcode 27 and performs an unconditional jump to the address stored in the addr variable:
case 27: {
this.REG_PC = addr-1;
break;
}
The subroutine-call instruction JSR differs in that it pushes the current program counter onto the stack before jumping. The push method is called twice because each stack cell holds no more than eight bits. The sixteen-bit address has to be saved in two steps: first the upper eight bits, then the lower:
case 28:{
this.push((this.REG_PC>>8)&255);
this.push(this.REG_PC&255);
this.REG_PC = addr-1;
break;
}
On return from a subroutine (the RTS instruction), the two halves of the address are pulled from the stack in reverse order:
case 42:{
this.REG_PC = this.pull();
this.REG_PC += (this.pull()<<8);
if (this.REG_PC==0xFFFF) return; // return from NSF play routine:
break;
}
Some instructions operate on register contents and memory. Here, for example, is the STA instruction, which stores the accumulator value at the address given in the instruction's argument:
case 47:{
this.write(addr, this.REG_ACC);
break;
}
This method of emulation is called interpretation. It is the simplest, the most common, and relatively inefficient, though not enough to cause real trouble. For fairness, two other methods deserve a mention: dynamic and static recompilation. Unlike an interpreter, a recompiler does not execute instructions one at a time; it first grinds the code into a more convenient form, for example into JavaScript itself, which can then be fed to a JIT compiler.
Static recompilation processes the entire program at once. For emulating retro computers, which merrily mix code with data and do not shy away from modifying the program on the fly, this method is unsuitable. Dynamic recompilers process programs piece by piece. They accumulate instructions until they hit a branch operation, then compile and execute the resulting code. Later, the dynamic recompiler can re-run already compiled fragments, which is faster.
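The accumulate-until-a-branch idea can be shown with a toy dynamic recompiler in Python (an illustration of the principle only; jsSMS of course emits JavaScript, and the instruction set below is invented). It translates a straight-line run of toy opcodes into Python source, compiles it once, and reuses the compiled block on later visits:

```python
def compile_block(program, entry, cache={}):
    """Translate instructions from entry up to the first branch into one
    compiled Python function, cached per entry point (toy example)."""
    if entry in cache:
        return cache[entry]           # already compiled: reuse the block
    lines, pc = ['def block(state):'], entry
    while True:
        op = program[pc]
        if op[0] == 'INC':
            lines.append('    state["acc"] += 1')
        elif op[0] == 'ADD':
            lines.append('    state["acc"] += {}'.format(op[1]))
        elif op[0] == 'JMP':          # a branch ends the basic block
            lines.append('    return {}'.format(op[1]))
            break
        pc += 1
    namespace = {}
    exec('\n'.join(lines), namespace)  # "JIT": compile the block once
    cache[entry] = namespace['block']
    return cache[entry]

program = [('INC',), ('ADD', 5), ('JMP', 0)]   # an endless +6 loop
state, pc = {'acc': 0}, 0
for _ in range(3):                             # run the block three times
    pc = compile_block(program, pc)(state)
print(state['acc'])  # → 18
```

The second and third iterations hit the cache, which is exactly where a dynamic recompiler gains over an interpreter that re-decodes every instruction on every pass.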
Такой подход эксплуатирует jsSMS — эмулятор Sega Master System, восьмибитного предшественника «Мегадрайва». Разработчик jsSMS утверждает, что при помощи динамической рекомпиляции ему удалось ускорить работу программы в пять-шесть раз.
Все просто и понятно? Так и должно быть. JSLinux ничуть не сложнее. Да, он вынужден поддерживать больше команд, но сами команды почти столь же прямолинейны, как команды MOS 6502. В конечном счете, для того, чтобы разработать такую программу, нужно обладать только железным терпением и болезненной любовью к чтению скучных интеловских спецификаций. Нужное сочетание качеств встречается не так уж часто, но если оно есть, достаточно строго следовать правилам, и Linux заработает.
С эмуляцией NES дело обстоит несколько иначе. Смоделировать MOS 6502 совсем нетрудно, но это не конец, а только начало пути. Когда работаешь с ретрокомпьютерами, прямолинейность спецификаций быстро заканчивается, и тогда ты остаешься один на один с хтоническим неевклидовым безумием, которое царит за их пределами. Оно чуждо всему, что существует сейчас, но его нужно понять, а затем воспроизвести во всех нездоровых и пугающих деталях — иначе Марио не прыгать.
Суди сам: в исходниках эмулятора Беллара около семи тысяч строк, шесть из которых занимает описание процессора. Это программа, которая способна запустить полноценный современный Linux. JSNES тем временем эмулирует простенькую приставку тридцатилетней давности, основанную на примитивном процессоре с двумя регистрами. Здравый смысл подсказывает, что такой эмулятор обязан быть проще, но ретрокомпьютеры и здравый смысл — понятия несовместные. По величине кода JSNES почти не уступает JSLinux, и большая его часть не реализует спецификации — она борется с безумием.
Atari
Послушай историю. В сентябре 1977 года компания Atari выпустила домашний компьютер под названием Atari 2600. Краткого знакомства с этим легендарным прибором достаточно для того, чтобы осознать весь ужас ситуации, в которой находятся разработчики эмуляторов ретрокомпьютеров.
Atari 2600 использовала урезанную версию уже знакомого нам процессора MOS 6502 и обладала оперативной памятью величиной 128 байт (этот абзац там не уместился бы — он в три раза длиннее). Кроме того, к устройству можно было подключать картриджи с ПЗУ объемом четыре килобайта, а его видеочип позволял отображать на телеэкране изображение с разрешением 160 на 190 пикселей и 128 цветами на пиксель.
А теперь самое важное: у Atari 2600 не было видеопамяти. 160 на 190 пикселей. 128 цветов. И ни единого байта для того, чтобы их хранить. Как это возможно?
Сейчас 2015 год. Те, кто читает эту статью, скорее всего, не видели телевизора с электронно-лучевой трубкой уже лет десять — а некоторые, вполне возможно, и никогда. Это странно, но, боюсь, для того, чтобы объяснить, как работало видео Atari 2600, нужно начинать с самого начала — с магнитов.
Изображение на экране телевизора с электронно-лучевой трубкой — это иллюзия. В действительности телевизоры двадцатого века могли заставить светиться лишь крохотный участок экрана. Они делали это при помощи пучка электронов, который сфокусирован и направлен в нужную точку при помощи мощных магнитов. Чтобы построить изображение, электронный пучок с огромной скоростью бежал по экрану: от верхнего левого к верхнему правому углу, затем возвращался налево, опускался чуть ниже, повторял путь к противоположному краю, и так до тех пор, пока не достигал самого низа. От того, какой сигнал поступает в телевизор, зависела интенсивность пучка и, соответственно, яркость «пикселей», которых он касается.
In most computers that appeared after 1980, several layers of abstraction reliably separate the programmer from electron beams and magnets. The Atari 2600, though, is so primitive that it cannot afford that luxury. The beam running over the phosphor is the protagonist of every program for this platform. Developers had to track continuously where exactly the beam was at every moment, so that at just the right time they could issue a command and make it light a few pixels on the screen (in reality their job was even harder, but let's not get bogged down in details).
The Atari 2600's processor was no speed demon, so every action had to be measured out to the cycle. Miss by a fraction of a second and there is no picture. This created difficulties and constraints that are hard to imagine today. One example: a pair of instructions that loads a value from memory and then sets the register defining the beam's "color" takes five cycles. In five cycles the beam paints fifteen pixels. It follows that the color can be changed no more than eleven times per line, and only if the program does nothing besides changing colors.
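The arithmetic above is easy to sanity-check. A small sketch (the figure of 3 pixels per CPU cycle is the TIA's color-clock ratio, an assumption not spelled out in the text):

```python
import math

PIXELS_PER_CYCLE = 3    # assumed TIA color clocks ("pixels") per CPU cycle
CYCLES_PER_CHANGE = 5   # load a value, then store it to the color register
VISIBLE_PIXELS = 160    # horizontal resolution of one scanline

# One color change "costs" 15 pixels of beam travel, so a scanline can hold
# at most ceil(160 / 15) = 11 bands of distinct color.
pixels_per_change = PIXELS_PER_CYCLE * CYCLES_PER_CHANGE
max_color_bands = math.ceil(VISIBLE_PIXELS / pixels_per_change)
print(pixels_per_change, max_color_bands)  # 15 11
```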
Not scared yet? You will be: the various television standards imply different scan speeds and frame rates. On other platforms the video adapter takes care of that difference, but the Atari 2600 offers no such help. The peculiarities of PAL, SECAM, and NTSC wreck carefully cycle-counted instruction sequences and noticeably affect how programs work. Completing the apocalyptic picture are the game developers, who quickly learned to use the Atari 2600's meager capabilities in unintended ways, squeezing out of the console things it was by all formal criteria incapable of.
An emulator of the Atari 2600 has to account for all of this. It must know how fast the beam sweeps across the TV screen. It must emulate the smallest peculiarities of the television standards. It must precisely synchronize the movement of a beam (which does not actually exist) with the execution times of the various instructions and with the delays introduced by the Atari 2600's electronics; the documentation, naturally, says not a word about those, but the games were debugged on real hardware and will of course fall apart if a microsecond goes missing somewhere. Finally, the emulator must faithfully reproduce every defect and bug of the device that application developers came to rely on.
Games for the Atari 2600
Here is a paradox: booting Linux with JavaScript is easier than getting primitive prehistoric games to work. At least the interface of an IDE and an alphanumeric terminal are predictable. They do not depend on the phase of the Moon, the length of the programmer's beard, or a thousand other factors. With hardware designed more than thirty years ago it is the other way around. You have to think about countless undocumented hardware peculiarities involving timing, graphics, and sound. Those peculiarities are the main difficulty.
The Atari 2600 is an extreme case, but the NES holds plenty of surprises too. One perennial source of problems is cartridges. NES cartridges often contained not only ROM with game code but also custom hardware that could do just about anything. For example, if a cartridge had to hold more data than its allotted address space allowed, it carried a dedicated bank-switching chip. Other cartridges came with non-volatile memory for saving game progress, and sometimes even with special coprocessors. Dozens of cartridge varieties existed. Each worked in its own way, and if an emulator does not support the right one, the game will not run.
Another difficulty, once again, concerns graphics. In the NES they were handled by a dedicated coprocessor running at three times the clock rate of the central processor. It assembled the on-screen image from a background and 64 sprites measuring 8 by 8 or 8 by 16 pixels. The programmer specified where a sprite was stored, in which part of the screen it should be drawn, and whether and how the background should be shifted. The coprocessor did the rest.
At first glance it all looks simple and convenient: no comparison with the video hell of the Atari 2600. But, as the saying goes, a pig will always find its mud. Game developers were not satisfied with the NES's limitations. The console's video processor started glitching when more than eight sprites landed on one scanline, and it could not scroll different parts of the screen independently. Gradually programmers learned to work around these restrictions. It turned out that if you swap the video data while the video processor is drawing a frame, you can get more out of it. The trouble is that this wonderful idea drops us right back into the grim kingdom of TV scan timing and cycle counting.
One could talk forever about overcoming such difficulties. We will not; instead, let's concentrate on something more practical: emulating the console's video output in the browser. JSNES builds its image with the Canvas tag:
self.root = $('<div></div>');
self.screen = $('<canvas class="nes-screen" width="256" height="240"></canvas>').appendTo(self.root);
self.canvasContext = self.screen[0].getContext('2d');
self.canvasImageData = self.canvasContext.getImageData(0, 0, 256, 240);
The canvasImageData variable gives access to the individual pixels of the Canvas. Each pixel is described by four integers: one for each of the three color components and one for transparency (which can be ignored).
Games for the NES
The writeFrame method is responsible for putting the image on screen. It receives two video buffers as input: one holds the current frame, the other the previous one. Both buffers are ordinary JavaScript arrays, but they store the image in a slightly different format than canvasImageData. Where a canvasImageData pixel occupies four array elements, in buffer and prevBuffer each pixel is a single integer. The colors are packed into three bytes of that number and extracted with bitwise shifts. To save time, this is done only for the elements of buffer that differ from prevBuffer. The writeFrame method ends with a putImageData call that displays the assembled image in the browser:
writeFrame: function(buffer, prevBuffer) {
var imageData = this.canvasImageData.data;
var pixel, i, j;
for (i=0; i<256*240; i++) {
pixel = buffer[i];
if (pixel != prevBuffer[i]) {
j = i*4;
imageData[j] = pixel & 0xFF;
imageData[j+1] = (pixel >> 8) & 0xFF;
imageData[j+2] = (pixel >> 16) & 0xFF;
prevBuffer[i] = pixel;
}
}
this.canvasContext.putImageData(this.canvasImageData, 0, 0);
}
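The byte shuffling in writeFrame can be checked outside the emulator. A Python sketch of the same unpacking (the layout, low byte = red, is implied by the shifts above):

```python
def unpack_pixel(pixel):
    # Mirror writeFrame's shifts: low byte -> R, middle byte -> G, high byte -> B
    return (pixel & 0xFF, (pixel >> 8) & 0xFF, (pixel >> 16) & 0xFF)

# Blue lives in the high byte, red in the low byte (0xBBGGRR packing)
print(unpack_pixel(0xFF8000))  # (0, 128, 255)
```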
And that is essentially it. Besides dissecting the JSNES sources, anyone interested in emulating retrocomputers with JavaScript can be pointed to the series of articles on writing a JavaScript GameBoy emulator by the British programmer Imran Nazar. After that, arm yourself with a description of the platform that interests you (enthusiasts have usually figured everything out long ago) and get to work. If that feels scary, start with something simple: write an emulator for some virtual machine, say the Z-machine bytecode. As a first step, it is just the thing.
Where to find emulators
JBacteria emulates the ZX Spectrum and is a JavaScript port of the Bacteria emulator, which is notable for its tiny size: only four kilobytes. The JBacteria site hosts dozens of Spectrum games that can be opened right in the browser.
To run this GameBoy Advance emulator, you will have to stock up on game ROMs from pirate sites; without them JS-VBA-M will not work. Many emulator authors follow the same tactic, hoping it will keep them out of the lawyers' sight.
This amateur radio computer, designed in the USSR some thirty years ago, is built around the eight-bit KR580VM80A microprocessor, a Soviet clone of the Intel 8080. The emulator's site hosts numerous games for the RK, for example the unforgettable "Klad" (Lode Runner).
This link leads to the most complete catalog of retro emulators written in JavaScript. Besides the predictable game consoles there is exotica as well: emulators of the PDP-11 and the Burroughs B5500, machines from earlier generations of computing hardware.
roundup
eliminate bugs and weeds from shell scripts
I have a SQL column that is set to money; it has four digits after the decimal point. I am calculating this column in an UPDATE query and would like to round it up. Example: 2388.6796 should become 2389.
Math.Ceiling(0.5);
SqlCommand cmd1 = new SqlCommand("UPDATE Products SET [ThirdPartyRate] = 'Ceiling(" + GridView1.Rows[SelectedIndex].Cells[6].Text.ToString() + "' * [Price]) WHERE [Supplier] like '" + GridView1.Rows[SelectedIndex].Cells[0].Text.ToString() + "' ", con);
Source: (StackOverflow)
I've got this script, which multiplies the value entered into an input field by the dropdown's value assigned through the span's data-val.
How can I make the script show the result rounded up to 3 decimals?
$(document).ready(function () {
function showTab(name) {
$('div.fruit').hide();
var $div = $('#' + name).show();
var number = parseInt($('.number').val(), 0);
$('span', $div).each(function () {
$(this).text($(this).data('val') * number);
});
}
$('#update').click(function() {
showTab($('#dropdown').val());
});
showTab($('#dropdown').val());
});
Source: (StackOverflow)
How to round time to nearest hour in Excel, for example:
67:45:00 will be 68:00:00
and
53:14:00 will be 53:00:00
regards
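In Excel something like =MROUND(A1, "1:00:00") (A1 being a hypothetical cell holding the time) is the usual answer; note the second example rounds down, so this is nearest-hour rounding, not rounding up. The underlying arithmetic, sketched in Python:

```python
def round_to_nearest_hour(hms):
    # Parse an H:MM:SS duration and round it to the nearest whole hour;
    # adding half an hour before truncating implements "round half up"
    h, m, s = (int(part) for part in hms.split(":"))
    total_seconds = h * 3600 + m * 60 + s
    return (total_seconds + 1800) // 3600

print(round_to_nearest_hour("67:45:00"))  # 68
print(round_to_nearest_hour("53:14:00"))  # 53
```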
Source: (StackOverflow)
I have a value like this:
$value = 2.3333333333;
and I want to round up this value into like this:
$value = 2.35;
I already tried round, ceil, etc., but the result is not what I expected.
Please anyone help.
Thanks
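One reading of 2.3333... becoming 2.35 is "round up to the next multiple of 0.05". The question is about PHP, but the arithmetic is the same everywhere; a Python sketch:

```python
import math

def ceil_to_step(value, step=0.05):
    # Round up to the next multiple of `step`. Binary floats make exact
    # multiples slightly fuzzy, so the display value is rounded at the end.
    return round(math.ceil(value / step) * step, 2)

print(ceil_to_step(2.3333333333))  # 2.35
```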
Source: (StackOverflow)
I am working on a central login system for an application that is written in Django, with a MediaWiki wiki and a Roundup bugtracker. At present, the method I am thinking of going with is to use the AuthDjango extension for Mediawiki (https://bitbucket.org/toml/django-mediawiki-authentication/src) and hack up something similar for Roundup. This method relies on the creation of a SessionProfile model in Django which maps session IDs (taken from cookies) to User instances, and MediaWiki/Roundup accesses the data by directly querying the Django database.
The advantage of this is that login, session and logout processes across all three apps are easily unified. However, the issue I have is that it relies on MediaWiki/Roundup having stored credentials for the Django database, and the requirements to get access to the MediaWiki or Roundup shell accounts are intentionally less stringent than for the main Django app (currently only one person has Django production access). So admins of the MediaWiki/Roundup instance (i.e. with shell access), or anyone who broke in via a remote exploit, would potentially be able to hijack user accounts on the main site.
So my question is: does anyone know of a better way to integrate the login mechanisms of these systems? Or, how would I be able to give MediaWiki/Roundup secure access to the Django database while minimizing the potential for abuse by people with access to the MediaWiki shell?
Source: (StackOverflow)
Is there a Scrum plugin for the Roundup Issue Tracker similar to Agilo for Trac? I realize that Roundup is an issue tracking system, whereas Trac is designed to be an integrated project management, SCM, and issue tracker. Therefore, maybe a better question would be: is anyone aware of a, preferably Python-based, Scrum tool to use in conjunction with Roundup? Although that may be a bit too subjective for this forum.
Source: (StackOverflow)
I think I just need a bit more guidance than what the documentation gives, and it's quite hard to find anything relating to Roundup and Apache specifically.
All I'm trying to do currently is to have Apache display what the stand-alone server does when running roundup-server support=C:/Roundup/
Running Windows XP with Apache 2.2, Python 2.5 and Roundup 1.4.6
I don't really have any further notes of interest, so if anyone has already got this running, could you please show me your configuration and I'll see how I go from there :) I don't expect anyone to analyse the 403 Forbidden error I get before I'm sure my httpd.conf file is correct first
Thanks in advance
Source: (StackOverflow)
An example signature may be:
On Tue, Mar 20, 2012 at 2:38 PM, Johnny Walker <johnny.talker@gmail.com> wrote:
And then follows the quoted reply. I do have a distinct sensation that this is locale-specific, though, which makes me a sad programmer.
The reason I ask for this is because roundup doesn't strip these correctly when replying through gmail to an issue. And I think origmsg_re is the config.ini variable I need to set alongside keep_quoted_text = no to fix this.
Right now it's the default origmsg_re = ^[>|\s]*-----\s?Original Message\s?-----$
Edit: Now I'm using origmsg_re = ^On[^<]+<.+@.+>[ \n]wrote:[\n] which works with some gmail clients that break lines that are too long.
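Roundup is written in Python, so the pattern from the edit above should behave like a Python regular expression; a quick sanity check against the sample header:

```python
import re

origmsg_re = re.compile(r"^On[^<]+<.+@.+>[ \n]wrote:[\n]", re.MULTILINE)

reply = ("On Tue, Mar 20, 2012 at 2:38 PM, Johnny Walker "
         "<johnny.talker@gmail.com> wrote:\n"
         "> And then follows the quoted reply.\n")
print(bool(origmsg_re.search(reply)))  # True
```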
Source: (StackOverflow)
I have the Excel script and the JSON page; to keep this from getting huge, I'm just going to include the important things.
function pull(toonName,toonRealm) {
if(!toonName || !toonRealm) {
return ""
}
var toonJSON = UrlFetchApp.fetch("us.battle.net/api/wow/character/"+toonRealm+"/"+toonName+"?fields=items,talents,statistics,stats,progression,audit")
var toon = JSON.parse(toonJSON.getContentText())
var getStats = function(sta) {
var crit = sta.crit,
haste = sta.haste,
mastery = sta.mastery,
spirit = sta.spr,
multi = sta.multistrike,
vers = sta.versatilityDamageDoneBonus
return [crit, haste, mastery, spirit, multi, vers]
}
var Stats = getStats(toon.stats)
var toonInfo = new Array(Stats[0], Stats[1], Stats[2], Stats[4], Stats[5], Stats[3]
)
return toonInfo;
}
What I got is a number with a lot of decimals, for example: 15.154545, 12.566666, 19.97091.
What I want is to convert that to: 15.15%, 12.56%, 19.97%
either from the script or from Excel.
The thing is that when I try to convert that in Excel by selecting the % number format, it gives me: 1515.45%, 1256.66%, 1997.09%
but if I choose the plain number format it gives me 15.15, 12.56 (without the "%")
and when I tried to inject that from the script like:
var toonInfo = new Array(Stats[0]+"%", Stats[1]+"%", Stats[2]+"%", Stats[4]+"%", Stats[5]+"%", Stats[3]
Excel couldn't edit the numbers. So maybe this is a dumb question, but I don't know how to do it.
Source: (StackOverflow)
I installed Roundup 1.4 from the official Debian Squeeze repo and want to run it with my Apache server using mod_wsgi. Host configuration:
<VirtualHost *:80>
ServerName support.domain.com
WSGIScriptAlias / /var/roundup/support/apache/roundup.wsgi
WSGIDaemonProcess support.roundup user=roundup group=roundup threads=25
WSGIProcessGroup support.roundup
<Directory /var/roundup/support>
<Files roundup.wsgi>
Order allow,deny
Allow from all
</Files>
</Directory>
# ... some logging configuration
</VirtualHost>
I installed the tracker in /var/roundup/support using roundup-admin install, configured it, and then initialised it using roundup-admin initialise. Then I created apache/roundup.wsgi:
from roundup.cgi.wsgi_handler import RequestDispatcher
tracker_home = '/var/roundup/support'
application = RequestDispatcher(tracker_home)
When opening my site at http://support.domain.com (of course the real URL is a bit different) I get an HTTP 500 Internal Server Error response and a log entry with:
mod_wsgi (pid=17433): Exception occured processing WSGI script '/var/roundup/support/apache/roundup.wsgi'.
RuntimeError: response has not been started
What's going on? How to run roundup with wsgi (not cgi) properly? Or where to look why response has not been started?
EDIT
Roundup's install manual says that wsgi handler would look like this:
from wsgiref.simple_server import make_server
# obtain the WSGI request dispatcher
from roundup.cgi.wsgi_handler import RequestDispatcher
tracker_home = 'demo'
app = RequestDispatcher(tracker_home)
httpd = make_server('', 8917, app)
httpd.serve_forever()
But this gives no response; the browser loads forever without any message or server log entry. I think starting another server from a script run by an Apache module isn't a good idea, so I tried another code sample:
from roundup.cgi.wsgi_handler import RequestDispatcher
tracker_home = '/var/roundup/support'
application = RequestDispatcher(tracker_home)
from flup.server.fcgi import WSGIServer
WSGIServer(application).run()
But this throws some errors like:
WSGIServer: missing FastCGI param REQUEST_METHOD required by WSGI!
WSGIServer: missing FastCGI param SERVER_NAME required by WSGI!
WSGIServer: missing FastCGI param SERVER_PORT required by WSGI!
WSGIServer: missing FastCGI param SERVER_PROTOCOL required by WSGI!
There must be a way to run my application from RequestDispatcher...
Source: (StackOverflow)
Looking at the code of malloc, we can see that it performs rounding up like this:
nunits = (nbytes + sizeof(Header) - 1) / sizeof(Header) + 1;
I understand why we perform
(nbytes + sizeof(Header)) / sizeof(Header)
But on the other hand, I don't understand why we need to subtract 1 in the numerator and add 1 to the quotient?
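The - 1 in the numerator is the standard integer ceiling-division idiom, and the + 1 reserves one extra unit for the Header itself. A Python sketch of the same arithmetic (HEADER_SIZE = 16 is an illustrative stand-in for sizeof(Header)):

```python
HEADER_SIZE = 16  # hypothetical stand-in for sizeof(Header)

def nunits(nbytes):
    # (nbytes + HEADER_SIZE - 1) // HEADER_SIZE is ceiling division: it rounds
    # nbytes up to a whole number of header-sized units. Without the - 1, an
    # exact multiple (say 16 bytes) would be charged one unit too many. The
    # trailing + 1 adds one unit for the Header bookkeeping block itself.
    return (nbytes + HEADER_SIZE - 1) // HEADER_SIZE + 1

print(nunits(1), nunits(16), nunits(17))  # 2 2 3
```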
Source: (StackOverflow)
I have a few modifications I need for my cost calculator:
Is there a way to add a Math.ceil to the number of cans so it rounds up to the nearest whole number, giving the minimum cans needed?
Whatever the minimum required cans are, can the cost reflect that by multiplying by 18.23 and giving the real amount?
$('input').keyup(function () { // run anytime the value changes
var firstValue = parseFloat($('#width').val()); // get value of field
var secondValue = parseFloat($('#height').val());
var thirdValue = parseFloat($('#per-can').val()); // convert it to a float
var forthValue = parseFloat($('#cost').val());
var fithValue = parseFloat($('#size').val());
var canCount = firstValue * secondValue / thirdValue;
$('#added').html((canCount * forthValue).toFixed(2));
$('#cans').html(canCount.toFixed(2));
if (Math.ceil(canCount) < 2) {
$('#error').html("Need at least 1!");
} else {
$('#error').empty();
}
});
http://jsfiddle.net/5xzSy/504/ thanks
Source: (StackOverflow)
I need to print data from a DataGridView on both sides of a preprinted form but:
Each side has different arrangement for that info.
Each side can only hold info from three rows, so:
1st, 2nd and 3rd row go on side 1;
4th, 5th and 6th row go on side 2;
7th, 8th and 9th row go on side 1;
10th, 11th and 12th go on side 2; and so on.
I will select which group to print.
I'm planning to do it this way:
((row.Index) +1) / 3,
round it up, with no decimals, to get an integer (like in the above Excel image),
MOD that integer by 2, (like in the above excel image).
If the result of that MOD by 2 is 1, then it will print Side 1 arrangement,if the result of that MOD by 2 is 0, then it will print Side 2 arrangement.
How do I do it in C#? I'm using VS2010 Express Edition. Also, I wanted to use System.Math.Ceiling, but I get namespace, decimal, double-precision and floating-point number warnings or errors.
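The ceiling-then-mod plan holds up; a sketch of the arithmetic (Python here, just to verify the logic; in C# note that (row.Index + 1) / 3 is integer division, so Math.Ceiling needs a double operand such as (row.Index + 1) / 3.0):

```python
import math

def side_for_row(row_index):
    # row_index is 0-based, as in a DataGridView. Rows group in threes
    # (group 1, 2, 3, ...), and odd groups print on side 1, even on side 2.
    group = math.ceil((row_index + 1) / 3)
    return 1 if group % 2 == 1 else 2

print([side_for_row(i) for i in range(12)])
# [1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2]
```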
Source: (StackOverflow)
I want to round up a value according to the 3rd decimal point. It should always take the UP value when rounding. I used Math.Round, but it is not producing the result I expected.
Scenario 1
var value1 = 2.526;
var result1 = Math.Round(value1, 2); //Expected: 2.53 //Actual: 2.53
Scenario 2
var value2 = 2.524;
var result2 = Math.Round(value2, 2); //Expected: 2.53 //Actual: 2.52
Scenario 1 is OK; it produces the result I expected. In the 2nd scenario I have the amount 2.524. I want it to consider the 3rd decimal point (which is '4' in that case) and round UP. The expected result is 2.53.
No matter what the 3rd decimal point is (whether it is less than 5 or greater than 5), it should always round UP.
Can anyone provide me a solution? I don't think Math.Round is helping me here.
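Math.Round implements midpoint rules, not unconditional rounding up, so it is indeed the wrong tool. One sketch of the always-up idea, in Python's decimal module (C#'s decimal type with Math.Ceiling(value * 100) / 100 expresses the same logic):

```python
from decimal import Decimal, ROUND_UP

def always_round_up(value, places="0.01"):
    # ROUND_UP always rounds away from zero at the requested precision.
    # Decimal(str(value)) avoids binary-float artifacts such as
    # 2.52 * 100 == 252.00000000000003.
    return Decimal(str(value)).quantize(Decimal(places), rounding=ROUND_UP)

print(always_round_up(2.526), always_round_up(2.524))  # 2.53 2.53
```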
Source: (StackOverflow)
Let's say we have an NSDecimal constant called total that contains the value 3.33333333, i.e. 10 divided by 3. 10 and 3 are both NSDecimalNumber. We want total to be 4 as an NSDecimalNumber in Swift.
let total = ten/three
// the ideal total would be rounded up whenever there is a fractional part
in the doc. we have
func NSDecimalRound(_ result: UnsafeMutablePointer<NSDecimal>,
_ number: UnsafePointer<NSDecimal>,
_ scale: Int,
_ roundingMode: NSRoundingMode)
maximumDecimalNumber()
Which one is the best fit for a calculator with currency style? Please include an example of how to use each of them if you know. Thank you.
Source: (StackOverflow)
I’ve been working on a new project recently called AstroChallenge. While the details of what exactly AstroChallenge is will have to come later, rest assured, it has to do with Astronomy.
One of the bits of information I’m interested in is whether a particular celestial object is visible in the sky or not. Given an observer’s latitude, longitude and elevation and an object’s right ascension and declination it becomes a straightforward calculation.
However, there are libraries written by people smarter than I am, and it would be a good idea to use them. So instead of spending my time carefully coding maths, I can simply:
$pip install pyephem
and
import ephem
into my project.
Now the work is done for me. Isn’t the modern age great?
PyEphem is a great python library for performing calculations on all sorts of celestial objects including planets, moons, comets, asteroids, stars and deep space objects. Once you input your observation date, time and location some of the interesting functions you can run include:
Next transit
Altitude, Azimuth
Distance from Earth, Sun, other bodies
Current Constellation
Phase, day, month and year
And so on. When you set up an observer, you can even supply dates in the path so you can, for example, find the positions of the moons of Jupiter on February 15, 1564.
Since AstroChallenge is a webapp written in Django we have data models for things like deep space objects on which we can place handy methods to get information from pyephem:
(fields truncated for readability)
class DeepSpaceObject(models.Model):
    ra_hours = models.IntegerField()
    ra_minutes = models.FloatField()
    dec_sign = models.CharField(max_length=1, choices=(('+', '+'), ('-', '-')), default="+")
    dec_deg = models.IntegerField()
    dec_min = models.FloatField()

    @property
    def fixed_body(self):
        """
        Return a FixedBody object which PyEphem uses to perform calculations
        """
        object = ephem.FixedBody()
        object._ra = "{0}:{1}".format(self.ra_hours, self.ra_minutes)
        object._dec = "{0}{1}:{2}".format(self.dec_sign, self.dec_deg, self.dec_min)
        return object

    def observation_info(self, observer):
        """
        Given an observer, perform the calculations we are interested in
        and return them as a dictionary
        """
        p_object = self.fixed_body
        p_object.compute(observer)
        up = True if ephem.degrees(p_object.alt) > 0 else False
        return {
            'alt': str(p_object.alt),
            'az': str(p_object.az),
            'up': up,
            'neverup': p_object.neverup,
            'rise': timezone.make_aware(observer.next_rising(p_object).datetime(), pytz.UTC) if p_object.rise_time else None,
            'set': timezone.make_aware(observer.next_setting(p_object).datetime(), pytz.UTC) if p_object.set_time else None
        }
Some things to note:
An object is “visible” if its altitude is greater than 0, meaning it is above the horizon. If it's still light out, or you live in a light-polluted area, you're probably still out of luck, though.
PyEphem's Observer.next_rising/next_setting methods may return None; that means an object either never rises (as can be determined using Body.neverup) or never sets.
The Observer data can be provided using a simple method on a UserProfile model:
class UserProfile(models.Model):
    user = models.OneToOneField(User, editable=False)
    timezone = TimeZoneField(default="UTC")
    lat = models.FloatField("latitude", default=0.0)
    lng = models.FloatField("longitude", default=0.0)
    elevation = models.IntegerField(default=0)

    @property
    def observer(self):
        observer = ephem.Observer()
        observer.lat, observer.lon, observer.elevation = str(self.lat), str(self.lng), self.elevation
        return observer

    @property
    def sunset(self):
        sun = ephem.Sun()
        sun.compute(self.observer)
        return timezone.make_aware(self.observer.next_setting(sun).datetime(), pytz.UTC)
Notice the observer property just returns an observer, so we can now supply it in our views to a celestial object and get the information we need. Another handy property, sunset, uses the observer property to compute the time at which the sun will be setting for this user. PyEphem rocks.
GUI textfield example
jcallum
Hi -
I am looking for an example of code to read/get text from a textfield using the ui module. I have tried a number of things but can't seem to figure it out.
This is one attempt:
import ui
v = ui.load_view('descent_calc')
v.present('sheet')
t1 = 120
def text_entered(sender):
    '@type sender: ui.Textfield'
    t1 = str(sender.text('textfield1'))
    print(t1)
If someone could point me to some good examples it would be great. Thanks.
John.ccc
@jcallum , I hope this helps. I have tried to keep it simple. But it just uses a delegate as described in the help file for the ui.TextField.
import ui
class MyTextFieldDelegate (object):
    def textfield_should_begin_editing(self, textfield):
        return True
    def textfield_did_begin_editing(self, textfield):
        pass
    def textfield_did_end_editing(self, textfield):
        pass
    def textfield_should_return(self, textfield):
        textfield.end_editing()
        return True
    def textfield_should_change(self, textfield, range, replacement):
        return True
    def textfield_did_change(self, textfield):
        print(textfield.text)  # only changed this
        #pass
f = (0, 0, 300, 480)
v = ui.View(frame=f)
tf = ui.TextField(frame = v.bounds)
tf.height =32
tf.delegate = MyTextFieldDelegate()
v.add_subview(tf)
v.present('sheet')
jcallum
Thanks very much. That works. I need to study it carefully to see why. This OOP is not intuitively obvious to me!
John.
ccc
Another approach...
import ui
user_value = '120'
def text_entered(sender):
    user_value = sender.text
    print(user_value)
text_field = ui.TextField()
text_field.action = text_entered
text_field.text = user_value
text_field.present()
ramvee
@jcallum So True.
As much as I love Pythonista, and though UI Help Module has the details, it is not intuitive for beginners like me.
But going through 2000 odd posts in this forum really helps. And friends on this forum are really helpful, including the developer @omz !
Also, several elements in the ui module have different action capabilities, and the fact that some of these require delegates makes it complex.
Maybe next update will have more examples pertaining to UI Module in documentation. :)
ccc
I highly recommend the ui-tutorial and scene-tutorial for learning these Pythonista modules: https://github.com/humberry
Phuket2
Guys, I would just say don't be scared of the delegates. I admit I had a hard time with them when I started, but it was more out of fear than anything else. In reality they are easy to use, and it makes a lot of sense to have them when you want more control over an object.
Still, @ccc's approach here is a lot more straightforward for the problem at hand. Also keep in mind these delegate classes can be copied directly from the help file. No need to memorise them.
Below is a better example of using a delegate, I think. It also shows how flexible delegates are. The fact that you can point the delegate to your own class is very nice. Most of the lines are setting up the view; to make a real example you need it. But 95% of the lines in the example are just a pattern, so to speak.
This is just another way. Not saying it's the best way.
# Pythonista Forum - @Phuket2
import ui
class MyClass(ui.View):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.make_view()
        self.value = None

    def make_view(self):
        tf = ui.TextField(frame = self.bounds.inset(10, 10))
        tf.height = 32
        tf.delegate = self
        tf.flex = 'w'
        self.add_subview(tf)

    # because the delegate is pointed at this class (tf.delegate = self),
    # you can define the delegate methods here. as the docs explain, you
    # only need to define the methods you need: the caller checks that a
    # method exists before calling it.
    def textfield_did_change(self, textfield):
        self.value = textfield.text

if __name__ == '__main__':
    w, h = 600, 800
    f = (0, 0, w, h)
    style = 'sheet'
    mc = MyClass(frame=f, bg_color='white')
    mc.present(style=style, animated=False)
    mc.wait_modal()
    print(mc.value)
ccc
Btw, in the next version of Pythonista, make_view can look like the code below; in the latest beta you can pass all those params to any ui.View subclass.
def make_view(self):
    tf = ui.TextField(frame=self.bounds.inset(10, 10), height=32, delegate=self, flex='w')
    #tf.height = 32
    #tf.delegate = self
    #tf.flex = 'w'
    self.add_subview(tf)
Let's learn Aardvark
Welcome to Aardvark, the language that has entranced programmers with its simplicity and amazingness for the last few days.
My goal is, by the end of this lesson, to have taught you the basics of the Aardvark language and set you on course to become an amazing Aardvark developer.
In programming you usually start with a Hello World program, but let's mix it up this time: let's start by learning how to write a program that takes the user's username as input and outputs a random welcome message. But first, we need to learn basic input and output.
This code will output This is my Aardvark program!:
output("This is my Aardvark program!")
You see, not that hard. Now don't forget the quotes; it won't work right without them. Let's look at what this code does: output has parentheses, which means it's a function, and inside the quotes is the message that shows up on the screen. Hmm, I wonder if I can change what's in the quotes and it will change the message. Let's try it:
output("This is a different message")
If you run that program you will see that it worked! Now let's learn how to take user input:
input("Enter your username: ")
If you run that code, you will see that it will give that message and then let you type in an answer. But how do we store that answer in our program? We use variables; variables store data for use later in the code. So if we add a = to the beginning of that line, it will store the input in the variable a. Let's try it:
a = input("Enter your username: ")
How do we know if it worked? Well, let's try to output the data inside a. Try this code:
a = input("Enter your username: ")
output(a)
When you output variables, you don't need those quotes. Let's run it. When we run it, it will ask for our username and then output what we typed in. We can already get their username, but we still need the random welcome message; let's first start with just a welcome message. If we output "Welcome, " before we output what they typed in, it would say Welcome, plus their username. Let's try it:
a = input("Enter your username: ")
output("Welcome, ")
output(a)
It worked! Let's simplify it: just do "Welcome, " + a instead of doing it on separate lines. Try this:
a = input("Enter your username: ")
output("Welcome, " + a)
It worked! We now have our username input and our welcome message, but what about the randomness? How can we make it do something random?
In Aardvark, the tools module has some functions to help us do random things. But how do we include a module? Try this code:
#include tools
It makes all the functions in the tools module available in our program. Let's add it to our code:
#include tools
a = input("Enter your username: ")
output("Welcome, " + a)
Now, what is the function to do random stuff? In Aardvark, you can use the randomchoice function from the tools module to make random choices. randomchoice takes a list of possible choices as its one argument. How do we make a list in Aardvark? Just put it between [ and ] and separate the items with commas. Let's try this code:
#include tools
username = input("Enter your username: ")
message = randomchoice(["Welcome, ", "Hello, ", "Have a good day, "])
output(message + username)
It worked! We have reached our goal! |
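For comparison, here is a rough Python equivalent of the finished Aardvark program. Python's random.choice plays the role of randomchoice, and print replaces output; the greet helper and the sample name are just for illustration:

```python
import random

def greet(username):
    # random.choice picks one item from the list, like Aardvark's randomchoice
    message = random.choice(["Welcome, ", "Hello, ", "Have a good day, "])
    return message + username

# input() would normally supply the username, e.g.:
# print(greet(input("Enter your username: ")))
print(greet("Ada"))
```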
This article is part of the "Why Python" series; see the series for the full list of articles.
While writing the previous article, "Why does Python have the pass statement?", I thought of a special construct that many people use as a replacement for the pass statement. Sure enough, after that article was published, three comments mentioned it.
The special construct in question is this:
# use ... in place of pass
def foo():
...
It is three English dots, i.e. half of the Chinese full-width ellipsis punctuation mark. If you are seeing it for the first time, you may well find it odd: what on earth is this thing? (PS: if you already know it, you may find it just as odd after reading this article carefully!)
1. Meet the "..." built-in constant
In fact, it is a built-in object in Python 3, and it has a formal name: Ellipsis.
More precisely, it is a built-in constant, one of the six built-in constants (the others being None, False, True, NotImplemented and __debug__).
As for this object's basic properties, the original post showed a screenshot; you should get the idea:
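Since the screenshot does not survive here, the same basic properties can be checked directly in code:

```python
# "..." evaluates to the Ellipsis singleton
print(...)               # Ellipsis
print(type(...))         # <class 'ellipsis'>
print(bool(...))         # True
print(... is Ellipsis)   # True
```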
"..." is nothing mysterious; it is just a symbol-like object that you may rarely see. Replacing pass with it causes no syntax error, because Python allows an object to appear without being assigned or referenced.
Strictly speaking, though, this is an unorthodox trick that does not hold up semantically: putting "...", any other constant, or an already-assigned variable into an otherwise empty indented block performs no action; it only says "here is a useless object, ignore it".
Python allows such unused objects to exist, but a smart IDE should warn about them (I use PyCharm), for example with: Statement seems to have no effect.
The "..." constant, however, seems to receive special treatment: my IDE shows no such warning for it.
Many people have grown used to treating it as a no-op like pass (the mailing-list discussion that first introduced it gave exactly this usage as an example). Personally, I still prefer pass; what about you?
2. The odd relationship between Ellipsis and ...
... was introduced in PEP 3100 and first shipped in Python 3.0, while Ellipsis was already present in earlier versions.
Although the official documentation says they are two spellings of the same object, and calls it a singleton, I found a very strange phenomenon that conflicts with the documentation:
As you can see, assigning to ... raises SyntaxError: cannot assign to Ellipsis, yet Ellipsis itself can be assigned to; their behavior is simply different! After being assigned, Ellipsis's memory address and type both change: it becomes a "variable" and is no longer a constant.
For comparison, assigning to constants such as True or None raises SyntaxError: cannot assign to XXX, but assigning to the NotImplemented constant raises no error.
As is well known, in Python 2 you could also assign to the boolean objects (True/False), but Python 3 made them immutable.
So one possible explanation is that Ellipsis and NotImplemented are leftovers from the Python 2 era; for compatibility, or simply because the core developers overlooked them, they can still be reassigned in the current version (3.8).
... was born in the Python 3 era and may completely replace Ellipsis in the future. For now the two coexist, and their inconsistent behavior is worth noting. My advice: just use "...", and treat Ellipsis as if it were already deprecated.
3. Why does Python use the "..." object?
Next, let us return to the question in the title: why does Python use the "..." object?
Here we focus only on the "..." of Python 3, without tracing the history and current status of Ellipsis.
My intent in asking is to find out what it is useful for and what problems it solves, and thereby glimpse more details of Python's language design.
There are roughly the following answers:
(1) Extended slicing syntax
The official documentation explains:
Special value used mostly in conjunction with extended slicing syntax for user-defined container data types.
The documentation gives no concrete implementation example, but combined with __getitem__() and the built-in slice(), it can be used to achieve effects such as [1, ..., 7] yielding a slice of seven numbers.
Since it is mainly used in data manipulation, most people rarely come across it. NumPy reportedly uses it in some syntactic-sugar constructs; if you use NumPy, it is worth exploring what is possible.
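A minimal sketch of such a user-defined container (the Seq class and its behavior are made up for illustration): __getitem__ receives Ellipsis as the key whenever the object is indexed with [...]:

```python
class Seq:
    def __init__(self, data):
        self.data = list(data)

    def __getitem__(self, key):
        # Indexing with "..." returns the whole sequence;
        # ints and slices fall through to normal list indexing.
        if key is Ellipsis:
            return self.data
        return self.data[key]

s = Seq(range(1, 8))
print(s[...])   # [1, 2, 3, 4, 5, 6, 7]
print(s[2])     # 3
```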
(2) Expressing "unfinished code"
... can be used as a placeholder, i.e. the role of pass that I mentioned in "Why does Python have the pass statement?". This was partly analyzed earlier in this article.
Some people find this cute, and the idea won the support of Python's creator, Guido:
(3) Use in type hints
Type hints, introduced in Python 3.5, are the main setting in which "..." is used.
It can express variable-length parameters: for example, Tuple[int, ...] denotes a tuple whose elements are all of type int but whose length is unlimited.
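For example (a small illustrative function, not from the original article):

```python
from typing import Tuple

def total(nums: Tuple[int, ...]) -> int:
    # "..." here means: any number of int elements
    return sum(nums)

print(total((1, 2, 3)))      # 6
print(total((4, 5, 6, 7)))   # 22
```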
It can also express an as-yet-undetermined variable type, as in this example from the documentation:
from typing import TypeVar, Generic
T = TypeVar('T')
def fun_1(x: T) -> T: ... # T here
def fun_2(x: T) -> T: ... # and here could be different
fun_1(1) # This is OK, T is inferred to be int
fun_2('a') # This is also OK, now T is str
T cannot be determined when the function is defined; its actual type is fixed only when the function is called.
In files with the .pyi extension, ... can be seen everywhere. These are stub files, which store type-hint information for Python modules, used by type checkers such as mypy and pytype, and by IDEs, for static code analysis.
(4) Representing infinite recursion
Finally, there is what I consider a truly fundamental reason: short of introducing "...", there is no better way to represent this.
First look at two examples:
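The two examples were screenshots in the original; they can be reconstructed like this:

```python
# A list that contains itself
a = [1, 2]
a.append(a)
print(a)        # [1, 2, [...]]

# A dict that contains itself
d = {'x': 1}
d['self'] = d
print(d)        # {'x': 1, 'self': {...}}
```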
Both results contain "...". What does it stand for?
For containers such as lists and dicts, if an element is a mutable object, what is stored is a reference to that object. When an element refers back to the container itself, an infinitely recursive reference arises.
Such an endless loop of references cannot be written out in full; Python represents it with ..., which is vivid and easy to grasp; apart from it, there is probably no better choice.
Finally, a summary of this article:
... is a built-in constant in Python 3 and a singleton object; although it is an alias of the Ellipsis that already existed in Python 2, its behavior has parted ways with the old object
... can replace the pass statement as a placeholder, though as a constant object it is not semantically rigorous in that role; many people have come to accept it by habit, so feel free to use it
... has quite a few use cases in Python: besides the placeholder usage, it supports extended slicing syntax, enriches type-hint checking, and represents endless self-reference in container objects
... may be unfamiliar to most people (some may even reject it as a symbolic oddity), but its existence is sometimes convenient. If this article helps more people get to know it, its purpose is achieved~
If you liked this analysis, you will probably enjoy these articles too:
This article belongs to the "Why Python" series (produced by Python Cat), which focuses on Python's syntax, design, and evolution, using "why"-style questions as entry points to show Python's charm. All articles are archived on GitHub at: https://github.com/chinesehuazhou/python-whydo
This is the password-list generator that Keiji puts together while trying to brute-force his way into Finedays Mail.
For copyright reasons I can't post screenshots, so if you are an Amazon Prime member, watch dele episode 1 at 28:07.
This post covers the thought process and the mechanism.
My goal is to write it so that even readers not very familiar with programming can enjoy it.
Thinking about how it works
In the show, Keiji runs it with a command like this:
$ ./createWordList.py yasuoka haruo haru 1974 0215 6084 44 -o p.lst
On a Mac, open Terminal and you will immediately see where to type commands. On Windows it would be the Command Prompt, where the prompt is > instead of $...
Since Keiji is clearly running it on a Mac-like OS, I'll explain with a Mac in mind; Windows users will have to adapt some commands (see below).
Back to the topic: let's look at the command piece by piece, from the left.
./createWordList.py
This part specifies the program to execute. It means "run the file called createWordList.py in the current directory". I'll skip the details.
Then
yasuoka haruo haru 1974 0215 6084 44 -o p.lst
is the data handed to createWordList.py.
Items starting with a hyphen generally pair with the word that follows them; they are called options, optional extra settings.
By convention a single hyphen is followed by one letter and a double hyphen by a word, with the single-hyphen form usually serving as shorthand for the double-hyphen one.
The option actually used here is:

-o p.lst
The period in p.lst suggests that it is a file name.
lst is short for list, and given the context p presumably stands for password; so the argument after -o is the name of the file that will hold the passwords.
As for why a file name is specified at all, that is fairly easy to guess: it designates where the program writes the password combinations it generates.
Everything then fits together, and -o can be taken as short for --output.
Summing up the analysis so far,
-o p.lst
is the option specifying the output file name.
Last come the words:

yasuoka haruo haru 1974 0215 6084 44

When we write the program, we make it output combinations of these.
Writing the program
First, let's translate the command-line interface into code in detail. From here on there will be a lot of Python; feel free to just enjoy the atmosphere.
./createWordList.py yasuoka haruo haru 1974 0215 6084 44 -o p.lst
When you run it in that form, the actual shape is
./createWordList.py {組み合わせるwords} {オプション}
so yasuoka haruo haru 1974 0215 6084 44 may be any number of words; in other words, it needs to be a variable-length argument.
This could be done with the sys module, but that is not the smartest choice, so I will use the argparse module instead.
First of all we need to receive the given data, so let's write that part.
import argparse

parser = argparse.ArgumentParser(description='Re-creation of the program from dele episode 1')
parser.add_argument('words', help='words to combine', nargs='*')
parser.add_argument('--output', '-o', help='output file; if not set, output.txt')
args = parser.parse_args()
These five lines do it. add_argument configures each argument.
The first parameter is the destination name (with leading hyphens stripped). help is the usage text shown when the command-line arguments are invalid.
nargs='*' makes the argument variable-length.
To actually receive the arguments, call

args = parser.parse_args()
After that you can access the values by destination name, as args.words and args.output.
args.output is simple: if it is empty, write to output.txt; otherwise write to the given file name. Nothing interesting there, so I'll skip it.
args.words, on the other hand, comes down to permutations.
For example, to build every ordering of n words taken 1 to n at a time, you enumerate the permutations of every subset size.
itertools makes this easy to implement.
The rearranged combinations come out as sequences, so turn them into strings and print them to standard output.
Then output str_list and we're done!
str_list = []
for el in output_lists:
    word = ''.join(el)
    str_list.append(word)
    print(word)
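Putting the pieces together, here is a minimal sketch of the whole createWordList.py; the function names and the file handling are my guesses, not taken from the show:

```python
import argparse
import itertools

def build_wordlist(words):
    # Every ordering of every subset, taking 1 to n words at a time
    out = []
    for n in range(1, len(words) + 1):
        for combo in itertools.permutations(words, n):
            out.append(''.join(combo))
    return out

def main(argv=None):
    parser = argparse.ArgumentParser(
        description='Re-creation of the wordlist generator from dele episode 1')
    parser.add_argument('words', help='words to combine', nargs='*')
    parser.add_argument('--output', '-o', default='output.txt',
                        help='output file; defaults to output.txt')
    args = parser.parse_args(argv)
    with open(args.output, 'w') as f:
        f.write('\n'.join(build_wordlist(args.words)))
```

With an `if __name__ == '__main__': main()` guard added, this would be runnable as `./createWordList.py yasuoka haruo -o p.lst`.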
Aside
Somehow writing out source code here doesn't work well, and even though this is a visual editor I now have to add br tags for line breaks; it has really become hard to use.
Scikit-learn provides ColumnTransformer, which allows you to easily specify which columns to apply the most appropriate preprocessing to, either via indexing or by specifying the column names.
Example from post:
from sklearn import model_selection
from sklearn.linear_model import LinearRegression
from sklearn.datasets import fetch_openml
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# Load auto93 data set which contains both categorical and numeric features
X, y = fetch_openml("auto93", version=1, as_frame=True, return_X_y=True)

# Create lists of numeric and categorical features
numeric_features = X.select_dtypes(include=['int64', 'float64']).columns
categorical_features = X.select_dtypes(include=['object']).columns

X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, random_state=0)

# Create a numeric and categorical transformer to perform preprocessing steps
numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())])

categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
    ('onehot', OneHotEncoder(handle_unknown='ignore'))])

# Use the ColumnTransformer to apply to the correct features
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical_transformer, categorical_features)])

# Append regressor to the preprocessor
lr = Pipeline(steps=[('preprocessor', preprocessor),
                     ('classifier', LinearRegression())])

# Fit the complete pipeline
lr.fit(X_train, y_train)
print("model score: %.3f" % lr.score(X_test, y_test))
Via towards data science. |
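As a smaller illustration of the column-selection point: the third element of each transformer tuple can be a list of column names or of positional indices. The toy data below is made up for illustration:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# Select the first column by positional index; pass the rest through untouched
ct = ColumnTransformer([('scale_first', StandardScaler(), [0])],
                       remainder='passthrough')
out = ct.fit_transform(X)
print(out)
```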
This article is reposted from AIWalker.
paper: https://arxiv.org/abs/2101.03697
code: https://github.com/DingXiaoH/RepVGG
Note: reply "RepVGG" in the backend of the official account to download the paper, code and pretrained models.
This paper, from Tsinghua University, MEGVII and collaborators, proposes a new CNN design paradigm that cleverly combines the idea of ACNet with the VGG architecture: it avoids the low training accuracy of VGG-style methods while keeping the efficient-inference advantage of the VGG design, and it is the first to push a plain model above 80% top-1 accuracy on ImageNet. Compared with networks such as ResNet, RegNet and EfficientNet, RepVGG achieves a better accuracy-speed trade-off.
The paper proposes a simple yet powerful CNN architecture, RepVGG. At inference time it has a VGG-like architecture, while at training time it has a multi-branch topology. This decoupling of the training and inference architectures comes from a technique called "re-parameterization", hence the name RepVGG.
On ImageNet, RepVGG reaches over 80% top-1 accuracy, the first time a plain model has done so. On an NVIDIA 1080Ti GPU, RepVGG runs 83% faster than ResNet-50 and 101% faster than ResNet-101 while being more accurate; compared with EfficientNet and RegNet, RepVGG shows a better accuracy-speed trade-off.
The main contributions are threefold:
a simple yet powerful CNN architecture, RepVGG, with a better accuracy-speed trade-off than architectures such as EfficientNet and RegNet;
a re-parameterization technique that decouples the training-time and inference-time architectures of plain models;
validation of RepVGG's effectiveness on tasks such as image classification and semantic segmentation.
A plain ConvNet has three advantages:
Fast: existing multi-branch architectures have lower theoretical FLOPs than VGG, but are not necessarily faster at inference. For example, VGG-16 has 8.4x the computation of EfficientNet-B3 yet runs 1.8x faster on a 1080Ti, which means its computational density is about 15x higher. The contradiction between FLOPs and inference speed mainly comes from two factors: (1) MAC (memory access cost): the Add and Concat of multi-branch structures involve little computation but high MAC; (2) degree of parallelism: models with high parallelism have been shown to infer faster than models with low parallelism.
Memory-economical: a multi-branch topology is memory-inefficient, because the result of every branch must be kept until the Add/Concat, which raises the peak memory footprint; a plain model is more memory-friendly.
Flexible: a multi-branch topology limits the flexibility of a CNN; for instance, a ResBlock forces its two branches to output tensors of the same shape, and multi-branch structures are also unfriendly to model pruning.
A plain model has these advantages but one major weakness: poor accuracy. VGG-16, for example, reaches only about 72% top-1 on ImageNet.
The RepVGG design is inspired by ResNet. ResNet's ResBlock explicitly builds a shortcut connection, modeling the information flow as y = x + f(x); when the dimensions of x and f(x) do not match, the flow becomes y = g(x) + f(x), with g(x) a 1x1 convolution.
Although the multi-branch structure is inference-unfriendly, it is training-friendly, so the authors design RepVGG to be multi-branch at training time and single-branch at inference time. Borrowing ResNet's identity and 1x1 branches, they design a module of the form y = x + g(x) + f(x),
where g(x) and f(x) denote the 1x1 and 3x3 convolutions respectively. During training, the CNN is built by simply stacking this module; during inference, the module can easily be converted into the form y = h(x), where the parameters of the single 3x3 convolution h(x) are obtained by linear combination from the trained model.
Next we describe how a trained module is converted into a single 3x3 convolution for inference. A figure in the original illustrated the parameter conversion.
Let W3 denote the kernel of the 3x3 convolution with C1 input and C2 output channels, and W1 the kernel of the 1x1 convolution with the same channel counts; let mu3, sigma3, gamma3, beta3 be the parameters of the BatchNorm following the 3x3 convolution, mu1, sigma1, gamma1, beta1 those of the BatchNorm following the 1x1 convolution, and mu0, sigma0, gamma0, beta0 those of the identity branch's BatchNorm. With input M1 and output M2, when the input and output shapes match (C1 = C2), we have
M2 = bn(M1 * W3, mu3, sigma3, gamma3, beta3) + bn(M1 * W1, mu1, sigma1, gamma1, beta1) + bn(M1, mu0, sigma0, gamma0, beta0).
Otherwise the module simply has no identity branch, i.e. only the first two terms remain. Note: bn denotes inference-time BatchNorm.
First, every BN can be merged into the convolution that precedes it: the fused kernel is W'_i = (gamma_i / sigma_i) * W_i and the fused bias is b'_i = beta_i - mu_i * gamma_i / sigma_i.
Note that the identity branch can be viewed as a 1x1 convolution (with an identity kernel). After this transformation, the module contains one 3x3 kernel, two 1x1 kernels and three bias terms. The three biases merge into one by simple addition, and each 1x1 kernel is added onto the center point of the 3x3 kernel. It sounds complicated, but a look at the code at the end of the post makes it quite simple.
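The BN-folding step can be sanity-checked numerically. Inference-time BatchNorm acts channel-wise on the conv output, so folding it means scaling by gamma/sigma and adding the bias beta - mu*gamma/sigma. A toy NumPy check, per channel and ignoring the spatial convolution itself:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=4)                      # conv output, one value per channel
mean = rng.normal(size=4)                   # BN running mean
var = rng.random(4) + 0.1                   # BN running variance
gamma, beta = rng.normal(size=4), rng.normal(size=4)
eps = 1e-5

# Inference-time BatchNorm applied to the conv output
bn_out = gamma * (y - mean) / np.sqrt(var + eps) + beta

# Folded form: scale the per-channel output and add a fused bias
t = gamma / np.sqrt(var + eps)
folded = t * y + (beta - mean * t)

print(np.allclose(bn_out, folded))  # True
```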
The previous section covered RepVGG's core module; now we turn to how the overall network is designed. The table below gives the configuration of RepVGG, including depth and width.
RepVGG is a VGG-like architecture: at inference time it uses only 3x3 convolution and ReLU, and no max pooling. For classification, global average pooling followed by a fully-connected layer serves as the output head.
The number of layers in each stage follows three simple rules:
the first stage runs at the largest resolution and is therefore the most time-consuming, so to reduce latency it uses only one conv layer;
the last stage has the most channels, so to save parameters it also gets only one conv layer;
the second-to-last stage, as in ResNet, receives the most layers.
Based on these rules, the per-stage layer counts of RepVGG-A are 1-2-4-14-1; the authors also built a deeper RepVGG-B with layer counts 1-4-6-16-1. RepVGG-A is meant to compete with lightweight and mid-compute networks, RepVGG-B with high-performance ones.
For per-stage widths, the classic 64-128-256-512 configuration is used, scaled by a factor a for the first four stages and a factor b for the last stage, usually with b > a (we want the last layer to have richer features). To avoid the high cost of large feature maps, the output channels of the first stage are capped at min(64, 64a). The resulting RepVGG variants are listed in the table below.
To further reduce computation and parameters, grouped convolution can optionally replace standard convolution: in RepVGG-A, the 3rd, 5th, 7th, ..., 21st conv layers use grouped convolution; in RepVGG-B, additionally the 23rd, 25th and 27th.
Next, the effectiveness of the proposed scheme is validated on different tasks, mainly through experiments on ImageNet classification.
The table above compares RepVGG with ResNets and their variants across accuracy, speed and parameter count at different compute budgets. RepVGG shows a better accuracy-speed trade-off, for example:
RepVGG-A0 is 1.25% more accurate and 33% faster than ResNet-18;
RepVGG-A1 is 0.29% more accurate and 64% faster than ResNet-34;
RepVGG-A2 is 0.17% more accurate and 83% faster than ResNet-50;
RepVGG-B1g4 is 0.37% more accurate and 101% faster than ResNet-101;
RepVGG-B1g2 matches ResNet-152 in accuracy while inferring 2.66x faster.
Note also that RepVGG is a parameter-efficient scheme: compared with VGG-16, RepVGG-B2 needs only 58% of the parameters while running 10% faster and scoring 6.57% higher.
Comparisons with EfficientNet and RegNet were carried out as well:
RepVGG-A2 is 1.37% more accurate and 59% faster than EfficientNet-B0;
RepVGG-B1 is 0.39% more accurate than RegNetX-3.2GF and slightly faster;
note, moreover, that RepVGG needs only 200 epochs to exceed 80% top-1 accuracy (see the table above). This should be the first time a plain model reaches such state-of-the-art accuracy. Compared with RegNetX-12GF, RepVGG-B3 infers 31% faster at comparable accuracy.
Although RepVGG is a simple and powerful ConvNet architecture with faster GPU inference, fewer parameters and lower theoretical FLOPs, on low-power edge devices MobileNet and ShuffleNet remain the more relevant choices.
That is all for the paper; for more ablation analysis, please see the original.
More than two weeks ago I had already heard that Xiangyu and colleagues had combined the idea of ACNet with Inception-style multi-branching to design a better re-parameterization scheme, RepVGG, which gains accuracy at training time while keeping inference efficient, allowing VGG-style networks to reach ResNet-level performance.
When I first saw the RepVGG architecture, I tried it in a VDSR image super-resolution model; a quick trial indeed brought an improvement, and without extra measures such as gradient clipping. Nice.
In a sense, RepVGG is an extreme simplification of ACNet: the figure above showed ACNet's structure, which uses three kinds of convolution branches, whereas RepVGG uses only the 3x3, 1x1 and identity branches. Another difference is that ACNet uses its module to replace the convolutions inside ResBlock or Inception, while RepVGG uses its module to replace the convolutions of VGG.
Finally, here is the author's implementation of the RepVGG core module:
# code from https://github.com/DingXiaoH/RepVGG
# note: needs numpy and torch; conv_bn is a Conv2d+BatchNorm2d Sequential helper defined elsewhere in the repo
import numpy as np
import torch.nn as nn

class RepVGGBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size,
stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', deploy=False):
super(RepVGGBlock, self).__init__()
self.deploy = deploy
self.groups = groups
self.in_channels = in_channels
assert kernel_size == 3
assert padding == 1
padding_11 = padding - kernel_size // 2
self.nonlinearity = nn.ReLU()
if deploy:
self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride,
padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode)
else:
self.rbr_identity = nn.BatchNorm2d(num_features=in_channels) if out_channels == in_channels and stride == 1 else None
self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, groups=groups)
self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, padding=padding_11, groups=groups)
print('RepVGG Block, identity = ', self.rbr_identity)
def forward(self, inputs):
if hasattr(self, 'rbr_reparam'):
return self.nonlinearity(self.rbr_reparam(inputs))
if self.rbr_identity is None:
id_out = 0
else:
id_out = self.rbr_identity(inputs)
return self.nonlinearity(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)
def _fuse_bn(self, branch):
if branch is None:
return 0, 0
if isinstance(branch, nn.Sequential):
kernel = branch.conv.weight.detach().cpu().numpy()
running_mean = branch.bn.running_mean.cpu().numpy()
running_var = branch.bn.running_var.cpu().numpy()
gamma = branch.bn.weight.detach().cpu().numpy()
beta = branch.bn.bias.detach().cpu().numpy()
eps = branch.bn.eps
else:
assert isinstance(branch, nn.BatchNorm2d)
kernel = np.zeros((self.in_channels, self.in_channels, 3, 3))
for i in range(self.in_channels):
kernel[i, i, 1, 1] = 1
running_mean = branch.running_mean.cpu().numpy()
running_var = branch.running_var.cpu().numpy()
gamma = branch.weight.detach().cpu().numpy()
beta = branch.bias.detach().cpu().numpy()
eps = branch.eps
std = np.sqrt(running_var + eps)
t = gamma / std
t = np.reshape(t, (-1, 1, 1, 1))
t = np.tile(t, (1, kernel.shape[1], kernel.shape[2], kernel.shape[3]))
return kernel * t, beta - running_mean * gamma / std
def _pad_1x1_to_3x3(self, kernel1x1):
if kernel1x1 is None:
return 0
kernel = np.zeros((kernel1x1.shape[0], kernel1x1.shape[1], 3, 3))
kernel[:, :, 1:2, 1:2] = kernel1x1
return kernel
def repvgg_convert(self):
kernel3x3, bias3x3 = self._fuse_bn(self.rbr_dense)
kernel1x1, bias1x1 = self._fuse_bn(self.rbr_1x1)
kernelid, biasid = self._fuse_bn(self.rbr_identity)
return kernel3x3 + self._pad_1x1_to_3x3(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid
Hi again
If I'd like to run a terminal command from my webpage, what is the best way?
I'd like to have a button to push if, for example, I want to restart my Raspberry Pi, or to run "wakeonlan 00:00:00:00:00:00".
/Cazz
You'll want to look into writing CGI scripts. This is code that runs when a site is loaded.
Depending on your webserver there may be a special CGI directory where executables will live. Most systems also have a special user that the webserver runs as. This user typically does not have access to system operations like rebooting, which is by design: if someone were able to change your CGI script and run it, you wouldn't want them to gain root privileges. There are always ways around these restrictions, just not always safe ones.
For instance, on my Pi I have the following file
/www/ping.cgi
Code: Select all
#!/bin/bash
echo "Ping" >> /tmp/cgi-called.log
cat << EOF
Content-type: text/html
<html>
<head>
<title>Cgi Script</title>
</head>
<body>
CGI Script ran, check out /tmp/cgi-called.log to see how many times
</body>
</html>
EOF
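The same pattern works in Python. Here is a hedged sketch of a CGI script that runs a shell command via subprocess; the path and MAC address are placeholders, and whether the webserver user is allowed to run the command depends on your setup:

```python
#!/usr/bin/env python
# Hypothetical /var/www/cgi-bin/wake.cgi
import subprocess

# CGI header, then a blank line, then the body
print("Content-type: text/plain")
print("")

try:
    # wakeonlan usually needs no root; "reboot" would need extra sudoers setup
    out = subprocess.check_output(["wakeonlan", "00:00:00:00:00:00"],
                                  stderr=subprocess.STDOUT)
    print(out.decode())
except Exception as e:
    print("command failed: %s" % e)
```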
xmpp: jecxjo@dukgo.com
Blog: http://jecxjo.motd.org/code
The placeholder page for lighttpd (/var/www/index.lighttpd.html) has the following information:
CGI scripts are looked for in /usr/lib/cgi-bin, which is where Debian packages will place their scripts. You can enable the cgi module by using the command "lighty-enable-mod cgi".
That was all I needed to do to get cgi working for me.
OK, but did you edit the config file?
I have done that, and I have now run "lighty-enable-mod cgi" and "/etc/init.d/lighttpd force-reload".
I did have to remove "mod_cgi" and run it again, and it says "OK", but it still can't find the cgi file.
http://www.raspberrypi.org/phpBB3/viewt ... 26&t=22986
/Cazz
Hi Cazz,
No I didn't edit the config at all. I just made a cgi-bin directory in /var/www/ and put my script in there (I am using perl but I assume it is all the same).
I wasn't sure why they were talking about /usr/lib/cgi-bin, but I didn't want my cgi scripts in a different location to my HTML files.
Hey guys.
I ran into a problem. I am also trying to control my camera from a web page, using PHP to run a script or command. I can run gphoto2 commands or sh scripts. The problem is that I have a camera that needs a USB reset, and I can't get that to work from the web page. Everything works from the command line. Has anyone solved this, or can you help? Thanks.
Code: Select all
<html>
<head>
<title>Output of my bash script</title>
</head>
<body>
<h1>Output of my bash script</h1>
<pre>
<?php system("gphoto2 --capture-image", $rc); ?>
</pre>
<br>
<?php echo "Return Code: {$rc}\n"; ?>
</body>
</html>
Strange
I made a clean Debian install and followed this guide (I had followed it before):
http://simonthepiman.com/how_to_setup_a ... upport.php
After that I ran "lighty-enable-mod cgi" and "/etc/init.d/lighttpd force-reload".
Then I uploaded the ping.cgi file and changed it to +x.
But it shows nothing, just a plain blank screen.
I even looked inside the log "/tmp/cgi-called.log"; nothing there.
/Cazz
I would try a simple cgi script first (e.g. http://perl.about.com/od/perltutorials/a/hellocgi.htm) before trying more complicated programs/scripts. One issue may be that the lighttpd daemon runs as user www-data, so all the cgi-bin scripts also run as www-data, which could mean you are running into permission problems.
Hmm, strange.
I have changed the owner of the cgi file and even the group owner, but it still does not show anything.
I wonder if it has to do with PHP running via mod_fastcgi; as I understand it, lighttpd runs PHP as CGI?
/Cazz
Hi Cazz,
I didn't mean the read permissions on the files, but permission to perform some operations. For example the command to get the cpu (?) temperature on the pi
/opt/vc/bin/vcgencmd measure_temp
fails when run as user www-data.
However, I should have read the thread again. Now I understand the ping.cgi script (I thought it was some sort of script that was trying to ping a host ... silly me).
Anyway, I tried the ping.cgi script myself. It turns out that if I put the script in /var/www my browser wants to download the script; that is, lighttpd doesn't execute it. However, if I move the script to /var/www/cgi-bin/ it works.
Code: Select all
pi@raspberrypi ~ $ sudo su - www-data
$ /opt/vc/bin/vcgencmd measure_temp
VCHI initialization failed
OK, but I don't think that is the problem.
I have even tried a "Hello world" script and nothing happened.
It does not say anything: no error, no download, just a plain white page.
The only thing I have added in my conf file is so I can run PHP.
I wonder if that is the problem.
Code: Select all
server.modules = (
"mod_access",
"mod_alias",
"mod_compress",
"mod_fastcgi",
"mod_redirect",
# "mod_rewrite",
)
server.document-root = "/var/www"
server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
server.errorlog = "/var/log/lighttpd/error.log"
server.pid-file = "/var/run/lighttpd.pid"
server.username = "www-data"
server.groupname = "www-data"
server.port = 80
index-file.names = ( "index.php", "index.html", "index.lighttpd.html" )
url.access-deny = ( "~", ".inc" )
static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".cgi" )
compress.cache-dir = "/var/cache/lighttpd/compress/"
compress.filetype = ( "application/javascript", "text/css", "text/html", "text/plain" )
# default listening port for IPv6 falls back to the IPv4 port
include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
include_shell "/usr/share/lighttpd/create-mime.assign.pl"
include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
fastcgi.server = ( ".php" => ((
"bin-path" => "/usr/bin/php-cgi",
"socket" => "/tmp/php.socket"
)))
/Cazz
Hi, I am trying to run this command on a web page, but I get a blank page.
I have attached an image showing what happens when I run the above command in the terminal of my Raspberry Pi.
Code: Select all
sudo /home/pi/sources/Adafruit_Python_DHT/examples/AdafruitDHT.py 2302 4
Attachments: command.PNG
I am really new to Django. The problem is that I cannot load my template, which consists of two basic HTML files. Here is the location of my template file:
/home/usman/Django Project/django-black/luckdrum/templates/
Here is my view function:
from django.shortcuts import render
from django.http import HttpResponse
from django.template.loader import get_template
from django.template import Context

def hello_template(request):
    t = get_template('signup.html')
    return HttpResponse(t)
Here is the urls.py file:
from django.conf.urls import include, url
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^hello/', 'blog.views.hello'),
    url(r'^signup.html/', 'blog.views.hello_template'),
]
I have also added the path in my settings.py as TEMPLATE_DIRS. The server shows an error saying the template does not exist. Please help me!
Place your templates in the project's templates/ folder or in your app's templates/ folder. They will be found automatically by Django.
Django will look first in the project-level templates/ folder and then in each app's templates/ folder.
And then, in your view:
from django.shortcuts import render

def hello_template(request):
    return render(request, '/signup.html')
Below is what your Django project should look like (as Django recommends for writing reusable apps):
mysite/
    manage.py
    mysite/
        __init__.py
        settings.py
        urls.py
        wsgi.py
    polls/
        __init__.py
        admin.py
        migrations/
            __init__.py
            0001_initial.py
        models.py
        static/
            polls/
                images/
                    background.gif
                style.css
        templates/
            polls/
                detail.html
                index.html
                results.html
        tests.py
        urls.py
        views.py
    templates/
        admin/
            base_site.html
Read the Django documentation: organizing templates, and how to write reusable apps.
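A side note on settings: in modern Django the old TEMPLATE_DIRS setting has been replaced by the TEMPLATES list. A minimal sketch of the relevant part of settings.py, with BASE_DIR defined as in the default project template:

```python
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        # Project-level templates folder, searched first
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        # Then each installed app's templates/ folder
        'APP_DIRS': True,
        'OPTIONS': {'context_processors': []},
    },
]
```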
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: mayaUndoExample
Description:
Two undo examples:
one showing how to use the undo stack via a decorator function,
and one using it inside a function.
'''
from maya import cmds
import traceback
import pymel.core as pm
def undo( function ):
'''
This is a function decorator.
You can use it by writing @undo one line above any function.
Before the function gets called an undo chunk is started;
when the function ends it gets closed.
Be careful: if you call the function recursively, it will break the undo stack.
:param function:
:return:
'''
def funcCall(*args,**kwargs):
result = None
try:
## here we open the undo chunk and we give it the name of the function
cmds.undoInfo( openChunk= True, chunkname = function.__name__ )
result = function( *args,**kwargs )
except Exception as e:
## If we have an error we will print the stack of the error
print traceback.format_exc()
## we also make sure the maya ui shows an error.
pm.displayError( "## Error, see script editor: %s"%e )
finally:
## we always need to close the chunk at the end else we corrupt the stack.
cmds.undoInfo( closeChunk = True )
return result
return funcCall
@undo
def simpleExampleWithDecorator():
'''
So here we have the decorator defined above the function definition.
So before this function is called, an undo chunk is created and closed after the function has finished.
:return:
'''
for i in xrange( 10 ):
loc = cmds.createNode( "spaceLocator" )
cmds.xform(loc, translation=(i,i,i))
simpleExampleWithDecorator()
def undoInFunction( ):
## it's recommended to always use a try/except with an undo chunk, else you may need to restart Maya when it fails.
## here we open the undo chunk and we give it the name of the function
cmds.undoInfo(openChunk=True, chunkname="Example")
try:
for i in xrange(10):
loc = cmds.createNode("spaceLocator")
cmds.xform(loc, translation=(i, i, i) )
except Exception as e:
## If we have an error we will print the stack of the error
print traceback.format_exc()
## we also make sure the maya ui shows an error.
pm.displayError("## Error, see script editor: %s" % e)
finally:
## we always need to close the chunk at the end, else we corrupt the undo stack.
cmds.undoInfo(closeChunk=True)
undoInFunction()
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: widgetUnderMouse
Description:
prints and highlights the widget under your mouse.
note that this is supppaaaaaaa hacky! And it will make any qt ui flash and slow down a lot!
But its helpfull when you want to find out what an object is called. I used this a lot in Maya to hack into the UI and add custom widgets.
If you click with the mouse buttons the overlay will stop.
'''
import sys
## Simple pyside 2 or 1 import check.
try:
PYSIDE_VERSION = 2
from PySide2.QtWidgets import *
from PySide2.QtGui import *#QFont, QIcon, QPixmap, QColor
from PySide2.QtCore import *
from PySide2.QtUiTools import *
from pyside2uic import compileUi
except:
from PySide.QtCore import *
from PySide.QtGui import *
def widgets_at(pos, topOnly = False ):
"""Return ALL widgets at `pos`
It uses the WA_TransparentForMouseEvents trick to find the underlying widgets.
Arguments:
pos (QPoint): Position at which to get widgets
"""
widgets = []
## Ask QT what widget is at this position
widget_at = QApplication.widgetAt(pos)
if topOnly:
return [widget_at]
while widget_at:
widgets.append(widget_at)
## Make widget invisible, so the next time we call the widgetAt function
## QT will return the underlying widget.
widget_at.setAttribute(Qt.WA_TransparentForMouseEvents)
widget_at = QApplication.widgetAt(pos)
# Restore attributes else nothing will respond to mouse clicks anymore.
for widget in widgets:
widget.setAttribute(Qt.WA_TransparentForMouseEvents, False)
return widgets
class Overlay(QWidget):
def __init__(self, parent=None):
'''
This is an overlay that sits across the entire UI.
This way it's easier to track the mouse position and interact with the widgets below it.
:param parent:
'''
super(Overlay, self).__init__(parent)
self.setAttribute(Qt.WA_StyledBackground)
self.setStyleSheet("QWidget { background-color: rgba(0, 255, 0, 0) }")
self.setMouseTracking(True)
self._widgetsUnderMouse = set()
def mouseMoveEvent(self, event):
'''
For every 'pixel' we move our mouse this function is called.
:param event:
:return:
'''
## query the current position
pos = QCursor.pos()
## Find the widgets below the cursor.
currentWidgets = set( [ widgets_at(pos)[1] ] )
## If we have found new widgets.
if currentWidgets != self._widgetsUnderMouse:
## Remove the old outline of the widgets we had before
self._removeOutline(self._widgetsUnderMouse)
## Add a new outline to our new widgets
self._addOutline(currentWidgets)
## Print all widgets we have under our mouse now.
for w in currentWidgets:
n = w.objectName()
print "Name: ",n, "Widget: ", w
self._widgetsUnderMouse = currentWidgets
## Let qt do the rest of its magic.
return super(Overlay, self).mouseMoveEvent(event)
def mousePressEvent( self, event ):
'''
If we click with the left mouse button the overlay stops.
:param event:
:return:
'''
self.deleteLater()
return super(Overlay, self).mousePressEvent(event)
def _addOutline( self, wList ):
for w in wList:
n = w.objectName()
## SUUUPER HACK TRICK
## We force an object name on the object
w.setObjectName("AAAAAAA")
## Make the object have a red outline with a stylesheet
w.setStyleSheet('QWidget#AAAAAAA {border: 4px solid red;outline-offset: -2px;}')
## Restore the object name
w.setObjectName(n)
def _removeOutline( self, wList):
'''
Not the best idea because we remove all style info,
actually we should store the style sheet info before setting the outline buuuuuttt you get the idea :D
:param wList:
:return:
'''
for w in wList:
w.setStyleSheet("")
def _clearAll(self):
self._removeOutline(self._widgetsUnderMouse)
def __del__(self):
self._clearAll()
self._widgetsUnderMouse = set()
def get_maya_window():
for widget in QApplication.allWidgets():
try:
if widget.objectName() == "MayaWindow":
return widget
except:
pass
return None
window = get_maya_window()
app = None
if not window:
'''
If we are not in Maya, we just make an example window
'''
app = QApplication(sys.argv)
window = QWidget()
window.setObjectName("Window")
window.setFixedSize(200, 100)
button = QPushButton("Button 1", window)
button.setObjectName("Button 1")
button.move(10, 10)
button = QPushButton("Button 2", window)
button.setObjectName("Button 2")
button.move(50, 15)
overlay = Overlay(window)
overlay.setObjectName("Overlay")
overlay.setFixedSize(window.size())
overlay._clearAll()
overlay.show()
if app:
window.show()
app.exec_()
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: qtSettingExample
Description:
Simple example on how to use qt settings.
The first time you run it, it will say you didn't have a last opened project and set an example path;
the second time it will print the value of the example path.
'''
from PySide.QtCore import QSettings ## pip install PySide
if __name__ =="__main__":
settings = QSettings("Company Name", "Tool Name")
## Check if the value already exists
stored = settings.value("lastOpenedProject")
if stored:
print "We have a last opened project: ", stored
else:
print "We dint have a last opened project. Setting the example path."
## Saven die bende
settings.setValue("lastOpenedProject", "C:/some/example.path")
print "Press enter to exit"
raw_input()
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: showQtWidgetHierarchy
Description:
In maya you sometimes want to hack the UI.
Seeing as the UI is made with Qt, you can hack into it by finding the right node and/or widget name.
To make this process easier you can run this script; it will show you the entire hierarchy and return it as a dict.
'''
import json
import sys
from PySide.QtGui import * ## pip install PySide
def widgets_recursive(d, widget = None, doPrint =False ):
if not widget:
for widget in QApplication.topLevelWidgets():
get_widget(widget, d, 0, doPrint)
else:
get_widget(widget, d, 0, doPrint)
def get_widget(w,d, depth = 0, doPrint=False):
'''
Recursively searches through all widgets down the tree and prints if desired.
:param w: the widget to search from
:param d: the dictionary to add it to
:param depth: current depth we are at
:param doPrint: if we need to print
:return:
'''
n = w.objectName()
n = n if n else str(w)
if doPrint: print "\t"*depth, n
newD = {}
for widget in w.children():
get_widget(widget, newD, depth +1 )
d[n] = newD
def get_widget_from_name(name):
for widget in QApplication.allWidgets():
try:
if name in widget.objectName() :
return widget
except:
pass
return None
if __name__ =="__main__":
## Remove this block if you are running this in maya or something.
## Here we make a simple QWindow with a layout and button.
app = QApplication(sys.argv)
wid = QWidget()
wid.setObjectName("myWindow")
button = QPushButton()
button.setObjectName("This is my button, there are many like it but this one is mine.")
lay = QHBoxLayout()
lay.addWidget(button)
wid.setLayout(lay)
wid.show()
## Create a simple dict to hold all the data in the end.
widgetHierarchyDict = {}
## If you have no idea where to start, just leave the topWidget argument as None.
## But if you are in a Qt-based application like Maya, you can also start from a widget you know the name of
## to speed up the process.
widgetObjectName = None ## "graphEditor1Window"
topWidget = get_widget_from_name(widgetObjectName)
## Recurse over all widgets and store all the information in the provided dict.
widgets_recursive(widgetHierarchyDict, topWidget)
## Print it with json so it's nice and clear.
print json.dumps(widgetHierarchyDict, sort_keys=True, indent = 2 )
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: findSignalsAndSlotsQt
This script gives you all available signals and slots on a Qt widget object.
Normally you can just check the documentation; however, if custom signals and slots are used it's hard to find them.
We do this by using the meta class from the object.
I used this to find the timechanged event on the maya timeControl widget.
'''
import sys
from PySide.QtGui import * ## pip install PySide
from PySide import QtCore
def get_widget(name):
'''
Kind of slow method of finding a widget by object name.
:param name:
:return:
'''
for widget in QApplication.allWidgets():
try:
if name in widget.objectName():
return widget
except Exception as e:
print e
pass
return None
def test( *arg, **kwarg):
'''
Simple test function to see what the signal sends out.
:param arg:
:param kwarg:
:return:
'''
print "The args are: ", arg
print "The kwargs are: ", kwarg
print
if __name__ == "__main__":
## Here we make a simple QLineEdit for argument sake ...
app = QApplication(sys.argv)
wid = QLineEdit()
wid.setObjectName("myLineEdit")
wid.show()
## Find the widget by name.
## See the qt ui list hierarchy script to find all widgets in a qt ui.
widgetObjectName = "myLineEdit"
widgetObject = get_widget(widgetObjectName)
if not widgetObject:
raise Exception("Could not find widget: %s" %widgetObjectName)
## Sanity check
if not wid == widgetObject:
raise Exception("Should not happen.XD")
## Get the meta data from this object
meta = widgetObject.metaObject()
## Iterate over the number of methods available
for methodNr in xrange(meta.methodCount()):
method = meta.method(methodNr)
## If the method is a signal type
if method.methodType() == QtCore.QMetaMethod.MethodType.Signal:
## Print the info.
print
print "This is the signal name", method.signature()
print "These are the signal arguments: ", method.parameterNames()
## If the method is a slot type
if method.methodType() == QtCore.QMetaMethod.MethodType.Slot:
## Print the info.
print
print "This is the slot name", method.signature()
print "These are the slot arguments: ", method.parameterNames()
'''
output example:
...
This is the signal name textChanged(QString)
These are the signal arguments: [PySide.QtCore.QByteArray('')]
This is the signal name textEdited(QString)
These are the signal arguments: [PySide.QtCore.QByteArray('')]
...
so now you can do
widgetObject.textChanged.connect(test)
and every time the text changes the 'test' function will be called
'''
widgetObject.textChanged.connect(test)
sys.exit(app.exec_())
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: getProcessId
Description:
Script to get all running processes from tasklist.
Alternatively you can use wmic to get the command-line parameters,
i.e. wmic process where caption="bla.exe" get commandline
'''
import json
import subprocess
def get_pdata(name = None):
'''
{'Session Name': 'Console',
'Mem Usage': '7',
'PID': '15888',
'Image Name': 'conhost.exe',
'Session#': '1'}
'''
pdata = []
## create a startup info so we can hide the window popup.
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
startupinfo.wShowWindow = subprocess.SW_HIDE
## Run the system command to get all task info in CSV format
task_list = subprocess.check_output(['tasklist', '/fo', 'csv'], startupinfo = startupinfo).replace('"','')
taskLines = task_list.splitlines()
dictHeaders =taskLines.pop(0).split(",")
## Iterate over all tasks
for task_line in taskLines:
taskDict = {taskData[0]:taskData[1] for taskData in zip(dictHeaders, task_line.split(','))}
## Filter on name if so desired
if name:
if name.lower() in taskDict["Image Name"].lower():
pdata.append(taskDict)
else:
pdata.append(taskDict)
return pdata
data = get_pdata()
print json.dumps(data, indent =2, sort_keys=True)
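As a side note, stripping the quotes and splitting on commas breaks as soon as a field itself contains a comma, and tasklist's Mem Usage column usually does (which is why the docstring example shows 'Mem Usage': '7'). A sketch of a safer parser using the stdlib csv module — parse_tasklist_csv is a hypothetical helper, and note that under Python 2 io.StringIO expects a unicode string:

```python
import csv
import io

def parse_tasklist_csv(csv_text, name=None):
    ## Parse `tasklist /fo csv` output with the csv module so quoted
    ## fields stay intact, e.g. a Mem Usage of "7,104 K".
    reader = csv.reader(io.StringIO(csv_text))
    rows = list(reader)
    dictHeaders = rows.pop(0)
    pdata = []
    for row in rows:
        taskDict = dict(zip(dictHeaders, row))
        ## Filter on name if so desired
        if name and name.lower() not in taskDict["Image Name"].lower():
            continue
        pdata.append(taskDict)
    return pdata
```

You would feed it the same check_output result as above, minus the replace('"','') call.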
I frequently have to sync audio from a boom or external mic with the camera audio, and without a clapboard this is quite tedious (even with a clapboard this is tedious XD), hence this script.
See the attached zipped example files for use with this script.
See http://www.fon.hum.uva.nl/praat/download_win.html
to download the Praat tool.
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: find the offset of one wav file relative to another.
Description:
This is a common problem with video and audio recording.
Where the sound recording does not start at exactly the same time as the video.
So I use this script to sync the two audio tracks.
It uses a wonderful third party tool called Praat
http://www.fon.hum.uva.nl/praat/
Praat is licensed under the GNU GPL:
http://www.fon.hum.uva.nl/praat/GNU_General_Public_License.txt
Note: I recommend converting the wav files to 16-bit 16 kHz.
That seems to give the best result; otherwise Praat will have issues.
Note the script only uses the first 30 seconds to compare.
If you reduce this time then the script becomes faster.
If you want to modify the duration edit this line in the .praat script.
Extract part: 0, 30, "no"
If you want to compare the entire wav file change this to...
Extract part: 0, 3000000000000000, "no"
or something like that XD
'''
import subprocess
praatExe = r"Praat.exe"
praatScript = r"findWavOffset.praat"
wavFileA =r"bensound-summer.wav"
wavFileB =r"bensound-summer_offset.wav"
## Construct the command
praat_command = '{} --run {} {} {}'.format(
praatExe, praatScript, wavFileA, wavFileB)
print "Starting praat command please wait a few seconds..."
sound_offset_time = subprocess.check_output( praat_command, shell=True).decode("utf-16")
print "The offset from file B to A is:"
print sound_offset_time ## should be like: 28.750839181733134
print "enter to exit"
raw_input()
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: scipy wav file
Description:
Usually not a big fan of scipy and numpy because of the lack of Maya compatibility.
But still useful as an example.
'''
from scipy.io import wavfile
def trim_wav( originalWavPath, newWavPath , start, end ):
'''
:param originalWavPath: the path to the source wav file
:param newWavPath: output wav file * can be same path as original
:param start: time in seconds
:param end: time in seconds
:return:
'''
sampleRate, waveData = wavfile.read( originalWavPath )
startSample = int( start * sampleRate )
endSample = int( end * sampleRate )
wavfile.write( newWavPath, sampleRate, waveData[startSample:endSample])
wp = r"pathToWav.wav"
trim_wav(wp, wp.replace(".wav", "_trimmed.wav"), 0,10)
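Since the whole point of avoiding scipy is Maya compatibility, here is a rough stdlib-only equivalent using the wave module. trim_wav_stdlib is a hypothetical name; it copies raw frames without any resampling:

```python
import wave

def trim_wav_stdlib(originalWavPath, newWavPath, start, end):
    ## Pure-stdlib alternative to the scipy version above.
    ## start and end are in seconds, same as trim_wav.
    src = wave.open(originalWavPath, 'rb')
    params = src.getparams()
    sampleRate = src.getframerate()
    startFrame = int(start * sampleRate)
    src.setpos(startFrame)
    ## readframes counts frames, not bytes, so channel count and
    ## sample width are handled for us.
    frames = src.readframes(int(end * sampleRate) - startFrame)
    src.close()
    dst = wave.open(newWavPath, 'wb')
    dst.setparams(params)  ## nframes is corrected automatically on close
    dst.writeframes(frames)
    dst.close()
```

Usage is the same as the scipy version: trim_wav_stdlib(wp, outPath, 0, 10).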
I spent an unreasonable amount of time trying to figure this out: how to pack your data so you can send it across a TCP connection, receive it on the other side, know what data you received, and unpack it accordingly. Hence sharing it; hopefully it's useful to someone out there ^_^
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: bytesForNetworkTransfer
Description:
byte tests for data transmission across a network
Simple example on how to create a data packet for sending across a tcp network connection.
'''
import sys
import struct
def create_packet( message, type = 100 ):
'''
This packs our string message into a packet that we can send across a network.
The type makes it easier to know how to handle the data on the receiving end.
For example, type 100 means normal text data
but type 200 can mean jpg image data.
the way we construct our packet is as follows.
[size][type][message]
the first part will say how big our message is.
the second part will hold the type number
the third part will be the message.
:param message: a byte string message.
:param type: the message type. This can be any number between 0 and 255.
:return:
'''
if not isinstance(message, basestring):
raise TypeError("The message is not a string. ")
try:
## Lets make sure we can encode the data as utf8 and back
decodedMessage = message.decode("utf8") ## data is now unicode
encodedMessage = decodedMessage.encode("utf8") ## data is now back to binary.
except UnicodeDecodeError as e:
print "This message can not be decoded as UTF8. There are characters in there utf8 doesnt understand."
raise e
## For a byte string, len gives us the number of bytes it contains,
## which is exactly the size the receiver needs to know.
messageSize = len(message)
## Lets start packing this.
## For this we use the struct module.
## https://docs.python.org/2/library/struct.html
## First we write the size.
## >I means unsigned int (4 bytes), so a maximum value of 4,294,967,295.
## if you expect bigger messages you should use >Q
byteString = struct.pack(">I", messageSize)
## Then we write the type
## >B is an unsigned char. it allows for 256 different types.
byteString += struct.pack(">B", type)
## Now we add our contents.
byteString += message
return byteString
def read_packet( incommingMessage ):
'''
Here we decode our message.
There are a few interesting caveats with network data transmission.
In this case I will assume a tcp network connection.
That means that "in theory" all the data comes in, in order.
However you can not always be sure that you received exactly one full message.
So if we do not have enough data, this function returns [None, remaining data]
if we have exactly the right amount of data [extracted message, ""]
if we have more data left [extracted message, remaining data]
Because we know how we wrote our create packet function,
we know that the message size is unsigned int( >I ) which is 4 bytes long
*see the table in the struct python docs
And the same goes for the type which is 1 byte long.
so we use that information here.
:param incommingMessage: the message as we received it from the network.
:return: list [the message content, the remaining data]
'''
if len( incommingMessage ) < 5:
return [None, incommingMessage]
sizeBytes = incommingMessage[:4]
unpackedMessageSize = struct.unpack(">I", sizeBytes)[0]
typeBytes = incommingMessage[4]
unpackedMessageType = struct.unpack(">B", typeBytes)[0]
## Make sure the full message body has arrived before slicing it out.
if len( incommingMessage ) < 5 + unpackedMessageSize:
return [None, incommingMessage]
unpackedMessage = incommingMessage[5:5 + unpackedMessageSize]
## Now we could do something with our data and handle it
## according to the unpackedMessageType.
## i.e.
## if unpackedMessageType == 200:
## unpackedMessage = bytes_to_image(unpackedMessage)
return [unpackedMessage, incommingMessage[unpackedMessageSize+5:]]
## In Python 2 a plain string literal is a byte string, so this message is stored as its utf-8 bytes.
## Using Japanese characters so we cover some weird edge cases.
messageContents = "textAnd漢字カタカナ"
## Create the packet ready for sending.
networkPacket = create_packet(messageContents)
## Decode it once we have received it.
decodedMessage, remainingPacketData = read_packet(networkPacket)
print "Original: ", repr(messageContents)
print "Decoded : ", repr(decodedMessage)
assert(messageContents == decodedMessage)
print "All tests passed"
# -*- coding: utf-8 -*-
## Copyright 2019 Trevor van Hoof and Jan Pijpers.
## Licensed under the Apache License, Version 2.0
## Downloaded from https://janpijpers.com or https://gumroad.com/janpijpers
## See the license file attached or on https://www.janpijpers.com/script-licenses/
'''
Name: zipFileExample
Description:
Simple zipfile example
'''
import zipfile
zipFilePath = r"C:\archive.zip"
with zipfile.ZipFile(zipFilePath, "r") as zFile:
## Print all files in filenames.
for f in zFile.infolist():
print f.filename
## Read a file
print zFile.read("directory/file_name.extension") |
While the process for adding Prometheus metrics to a Python application is well documented in the prometheus_client documentation, adding metrics whose names or labels are only known at runtime is trickier. Normal metric classes expect to be declared at module level so the default collector can pick them up. The documentation hints at a solution, however: use a custom collector.
The maintainer of the python client library has already done an excellent write-up on how to use custom collectors to take data from existing systems and create an exporter with them. The article (on extracting Jenkins job information) is here: https://www.robustperception.io/writing-a-jenkins-exporter-in-python
This article will describe how I took a Django application I wrote to store information on service level agreements, and exposed component service window information as metrics on the application’s own metrics endpoint (implemented with the excellent django-prometheus package).
Implementation
To add a Custom collector to a Django application, you will need to do three things:
Have a model or models that supply data you want to turn into metrics.
Write the collector class.
Register the class with the prometheus client global registry ONCE ONLY, and make sure this happens AFTER the database has initialised, and only when the django app is actually running. This last part is probably the part that caused me the most grief.
Assuming you’ve already carried out step one, this is how you go about steps 2 and 3:
Step 2: Write the collector
A collector class is a class that implements the ‘collect’ method. The ‘collect’ method is a generator that yields <type>MetricFamily objects, where <type> can be Counter, GaugeHistogram, Gauge, Histogram, Info, StateSet, Summary, Unknown, or Untyped.
Example (monitoring.py)
from prometheus_client.core import GaugeMetricFamily
from django.utils import timezone
from .models import Component
SERVICE_WINDOW_LAST_START_METRIC = 'service_window_last_start'
SERVICE_WINDOW_LAST_START_DOC = 'Last start time of the service window'
SERVICE_WINDOW_LAST_END_METRIC = 'service_window_last_end'
SERVICE_WINDOW_LAST_END_DOC = 'Last end time of the service window'
SERVICE_WINDOW_NEXT_START_METRIC = 'service_window_next_start'
SERVICE_WINDOW_NEXT_START_DOC = 'Next start time of the service window'
SERVICE_WINDOW_NEXT_END_METRIC = 'service_window_next_end'
SERVICE_WINDOW_NEXT_END_DOC = 'Next end time of the service window'
SERVICE_WINDOW_IN_WINDOW_METRIC = 'service_window_in_window'
SERVICE_WINDOW_IN_WINDOW_DOC = 'Is the service window active (1 for yes, 0 for no)'
class ComponentCollector(object):
def collect(self):
moment = timezone.now()
components = Component.objects.all()
metrics = {}
for component in components:
labels = component.get_labels()
prefix = component.name.replace('-', '_') + "_"
metrics[component.name] = {
'last_start': GaugeMetricFamily(''.join( (prefix, SERVICE_WINDOW_LAST_START_METRIC)),
SERVICE_WINDOW_LAST_START_DOC, labels=labels.keys()),
'last_end': GaugeMetricFamily(''.join( (prefix, SERVICE_WINDOW_LAST_END_METRIC)),
SERVICE_WINDOW_LAST_END_DOC, labels=labels.keys()),
'next_start': GaugeMetricFamily(''.join( (prefix, SERVICE_WINDOW_NEXT_START_METRIC)),
SERVICE_WINDOW_NEXT_START_DOC, labels=labels.keys()),
'next_end': GaugeMetricFamily(''.join( (prefix, SERVICE_WINDOW_NEXT_END_METRIC)),
SERVICE_WINDOW_NEXT_END_DOC, labels=labels.keys()),
'in_window': GaugeMetricFamily(''.join( (prefix, SERVICE_WINDOW_IN_WINDOW_METRIC)),
SERVICE_WINDOW_IN_WINDOW_DOC, labels=labels.keys()),
}
metrics[component.name]['last_start'].add_metric(labels=labels.values(),
value=component.get_last_start_time(moment).timestamp())
metrics[component.name]['last_end'].add_metric(labels=labels.values(),
value=component.get_last_end_time(moment).timestamp())
metrics[component.name]['next_start'].add_metric(labels=labels.values(),
value=component.get_next_start_time(moment).timestamp())
metrics[component.name]['next_end'].add_metric(labels=labels.values(),
value=component.get_next_end_time(moment).timestamp())
metrics[component.name]['in_window'].add_metric(labels=labels.values(),
value=int(component.in_window(moment)))
for comp in metrics.keys():
for metric in metrics[comp].values():
yield metric
In this example, I’ve taken a Component model, that exposes the service window last and next start & end times, plus indicates if the current time is in a service window for the component. The metrics:
<component_name>_service_window_last_start
<component_name>_service_window_last_end
<component_name>_service_window_next_start
<component_name>_service_window_next_end
<component_name>_service_window_in_window
are created, and the labels added to the component are added as metric labels to the metrics.
The <type>MetricFamily class does the rest of the work. The default prometheus registry will run collect once at registration time to check the metric definitions, then run it again on each scrape to obtain fresh metric values.
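The scrape-time behaviour can be sketched without Django or prometheus_client at all. GaugeSample and WindowCollector below are hypothetical stand-ins for GaugeMetricFamily and the real collector:

```python
class GaugeSample(object):
    ## Hypothetical stand-in for prometheus_client's GaugeMetricFamily.
    def __init__(self, name, documentation, value):
        self.name = name
        self.documentation = documentation
        self.value = value

class WindowCollector(object):
    ## collect() is a generator that is re-run on every scrape,
    ## so each scrape reads the current state of the data source.
    def __init__(self, components):
        self.components = components  ## stands in for Component.objects.all()

    def collect(self):
        for name, in_window in self.components.items():
            yield GaugeSample(name + '_service_window_in_window',
                              'Is the service window active', int(in_window))

components = {'billing': True, 'auth': False}
collector = WindowCollector(components)
scrape1 = {m.name: m.value for m in collector.collect()}
components['auth'] = True  ## state changes between scrapes...
scrape2 = {m.name: m.value for m in collector.collect()}
## ...and the second scrape picks it up without re-registering anything.
```

This is why registering the collector once is enough: the registry re-runs collect on every scrape, so the metrics always reflect the database at scrape time.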
Step 3: Registering the collector
This involves some Django trickery in the app.py module of your project.
You will need to do the following:
Write a migration hook to register if you are running a migration instead of the actual application.
Write another hook to register when you’ve connected to the database.
Register both hooks in the AppConfig ready method.
Register your Collector class with the prometheus registry the first time the database connection hook fires ONLY.
Example (apps.py)
from django.apps import AppConfig
from django.db.models.signals import post_migrate
from django.db.backends.signals import connection_created
from prometheus_client import REGISTRY
import logging
logger = logging.getLogger(__name__)
migration_executed = False
monitoring_initialised = False
def post_migration_callback(sender, **kwargs):
global migration_executed
logger.info('Migration executed')
migration_executed = True
def connection_callback(sender, connection, **kwargs):
global monitoring_initialised
# Check to see if we are not running a unittest temp db
if not connection.settings_dict['NAME'] == 'file:memorydb_default?mode=memory&cache=shared':
if not monitoring_initialised:
from .monitoring import ComponentCollector
REGISTRY.register(ComponentCollector())
monitoring_initialised = True
class ComponentSlaMonitorConfig(AppConfig):
name = 'component_sla_monitor'
def ready(self):
global migration_executed
post_migrate.connect(post_migration_callback, sender=self)
if not migration_executed:
connection_created.connect(connection_callback)
Note that we only import the Collector in the connection_callback hook. This is because importing at the top of the file will cause Django database errors.
Also, note the check to see if the DB connection is with an in-memory database. This is to disable monitoring registration during unit tests.
This code is based on Django 2.2. The ready method and some of the hooks have only been available since Django 1.7.
If you’ve been writing JavaScript for some time now, it’s almost certain you’ve written some scripts dealing with the Document Object Model (DOM). DOM scripting takes advantage of the fact that a web page opens up a set of APIs (or interfaces) so you can manipulate and otherwise deal with elements on a page.
But there’s another object model you might want to become more familiar with: The CSS Object Model (CSSOM). Likely you’ve already used it but didn’t necessarily realize it.
In this guide, I’m going to go through many of the most important features of the CSSOM, starting with stuff that’s more commonly known, then moving on to some more obscure, but practical, features.
What is the CSSOM?
The CSS Object Model is a set of APIs allowing the manipulation of CSS from JavaScript. It is much like the DOM, but for the CSS rather than the HTML. It allows users to read and modify CSS style dynamically.
MDN’s info is based on the official W3C CSSOM specification. That W3C document is a somewhat decent way to get familiar with what’s possible with the CSSOM, but it’s a complete disaster for anyone looking for some practical coding examples that put the CSSOM APIs into action.
MDN is much better, but still largely lacking in certain areas. So for this post, I’ve tried to do my best to create useful code examples and demos of these interfaces in use, so you can see the possibilities and mess around with the live code.
As mentioned, the post starts with stuff that’s already familiar to most front-end developers. These common features are usually lumped in with DOM scripting, but they are technically part of the larger group of interfaces available via the CSSOM (though they do cross over into the DOM as well).
Inline Styles via element.style
The most basic way you can manipulate or access CSS properties and values using JavaScript is via the style object, or property, which is available on all HTML elements. Here’s an example:
document.body.style.background = 'lightblue';
Most of you have probably seen or used that syntax before. I can add to or change the CSS for any object on the page using that same format: element.style.propertyName.
In that example, I’m changing the value of the background property to lightblue. Of course, background is shorthand. What if I want to change the background-color property? For any hyphenated property, just convert the property name to camel case:
document.body.style.backgroundColor = 'lightblue';
In most cases, a single-word property would be accessed in this way by the single equivalent word in lowercase, while hyphenated properties are represented in camel case. The one exception to this is when using the float property. Because float is a reserved word in JavaScript, you need to use cssFloat (or styleFloat if you’re supporting IE8 and earlier). This is similar to the HTML for attribute being referenced as htmlFor when using something like getAttribute().
Here’s a demo that uses the style property to allow the user to change the background color of the current page:
So that’s an easy way to define a CSS property and value using JavaScript. But there’s one huge caveat to using the style property in this way: This will only apply to inline styles on the element.
This becomes clear when you use the style property to read CSS:
document.body.style.backgroundColor = 'lightblue';
console.log(document.body.style.backgroundColor);
// "lightblue"
In the example above, I’m defining an inline style on the <body> element, then I’m logging that same style to the console. That’s fine. But if I try to read another property on that element, it will return nothing — unless I’ve previously defined an inline style for that element in my CSS or elsewhere in my JavaScript. For example:
console.log(document.body.style.color);
// Returns nothing if inline style doesn't exist
This would return nothing even if there was an external stylesheet that defined the color property on the <body> element, as in the following CodePen:
Using element.style is the simplest and most common way to add styles to elements via JavaScript. But as you can see, this clearly has some significant limitations, so let’s look at some more useful techniques for reading and manipulating styles with JavaScript.
Getting Computed Styles
You can read the computed CSS value for any CSS property on an element by using the window.getComputedStyle() method:
window.getComputedStyle(document.body).background;
// "rgba(0, 0, 0, 0) none repeat scroll 0% 0% / auto padding-box border-box"
Well, that’s an interesting result. In a way, window.getComputedStyle() is the style property’s overly-benevolent twin. While the style property gives you far too little information about the actual styles on an element, window.getComputedStyle() can sometimes give you too much.
In the example above, the background property of the <body> element was defined using a single value. But the getComputedStyle() method returns all values contained in background shorthand. The ones not explicitly defined in the CSS will return the initial (or default) values for those properties.
This means, for any shorthand property, window.getComputedStyle() will return all the initial values, even if none of them is defined in the CSS:
Similarly, for properties like width and height, it will reveal the computed dimensions of the element, regardless of whether those values were specifically defined anywhere in the CSS, as the following interactive demo shows:
Try resizing the parent element in the above demo to see the results. This is somewhat comparable to reading the value of window.innerWidth, except this is the computed CSS for the specified property on the specified element and not just a general window or viewport measurement.
There are a few different ways to access properties using window.getComputedStyle(). I’ve already demonstrated one way, which uses dot-notation to add the camel-cased property name to the end of the method. You can see three different ways to do it in the following code:
// dot notation, same as above
window.getComputedStyle(el).backgroundColor;
// square bracket notation
window.getComputedStyle(el)['background-color'];
// using getPropertyValue()
window.getComputedStyle(el).getPropertyValue('background-color');
The first line uses the same format as in the previous demo. The second line is using square bracket notation, a common JavaScript alternative to dot notation. This format is not recommended and code linters will warn about it. The third example uses the getPropertyValue() method.
The first example requires the use of camel casing (although in this case both float and cssFloat would work) while the next two access the property via the same syntax as that used in CSS (with hyphens, often called “kebab case”).
Here’s the same demo as the previous, but this time using getPropertyValue() to access the widths of the two elements:
Getting Computed Styles of Pseudo-Elements
One little-known tidbit about window.getComputedStyle() is the fact that it allows you to retrieve style information on pseudo-elements. You’ll often see a window.getComputedStyle() declaration like this:
window.getComputedStyle(document.body, null).width;
Notice the second argument, null, passed into the method. Firefox prior to version 4 required a second argument, which is why you might see it used in legacy code or by those accustomed to including it. But it’s not required in any browser currently in use.
That second optional parameter is what allows me to specify that I’m accessing the computed CSS of a pseudo-element. Consider the following CSS:
.box::before {
content: 'Example';
display: block;
width: 50px;
}
Here I’m adding a ::before pseudo-element inside the .box element. With the following JavaScript, I can access the computed styles for that pseudo-element:
let box = document.querySelector('.box');
window.getComputedStyle(box, '::before').width;
// "50px"
You can also do this for other pseudo-elements like ::first-line, as in the following code and demo:
let p = document.querySelector('.box p');
window.getComputedStyle(p, '::first-line').color;
And here’s another example using the ::placeholder pseudo-element, which applies to <input> elements:
let input = document.querySelector('input');
window.getComputedStyle(input, '::placeholder').color;
The above works in the latest Firefox, but not in Chrome or Edge (I’ve filed a bug report for Chrome).
It should also be noted that browsers have different results when trying to access styles for a non-existent (but valid) pseudo-element compared to a pseudo-element that the browser doesn’t support at all (like a made up ::banana pseudo-element). You can try this out in various browsers using the following demo:
As a side point to this section, there is a Firefox-only method called getDefaultComputedStyle() that is not part of the spec and likely never will be.
The CSSStyleDeclaration API
Earlier when I showed you how to access properties via the style object or using getComputedStyle(), in both cases those techniques were exposing the CSSStyleDeclaration interface.
In other words, both of the following lines will return a CSSStyleDeclaration object on the document’s body element:
document.body.style;
window.getComputedStyle(document.body);
In the following screenshot you can see what the console produces for each of these lines:
In the case of getComputedStyle(), the values are read-only. In the case of element.style, getting and setting the values is possible but, as mentioned earlier, these will only affect the document’s inline styles.
setProperty(), getPropertyValue(), and item()
Once you’ve exposed a CSSStyleDeclaration object in one of the above ways, you have access to a number of useful methods to read or manipulate the values. Again, the values are read-only in the case of getComputedStyle(), but when used via the style property, some methods are available for both getting and setting.
Consider the following code and demo:
let box = document.querySelector('.box');
box.style.setProperty('color', 'orange');
box.style.setProperty('font-family', 'Georgia, serif');
op.innerHTML = box.style.getPropertyValue('color');
op2.innerHTML = `${box.style.item(0)}, ${box.style.item(1)}`;
In this example, I’m using three different methods of the style object:
The setProperty() method. This takes two arguments, each a string: the property (in regular CSS notation) and the value you wish to assign to the property.
The getPropertyValue() method. This takes a single argument: the property whose value you want to obtain. This method was used in a previous example using getComputedStyle(), which, as mentioned, likewise exposes a CSSStyleDeclaration object.
The item() method. This takes a single argument, which is a positive integer representing the index of the property you want to access. The return value is the property name at that index.
Keep in mind that in my simple example above, there are only two styles added to the element’s inline CSS. This means that if I were to access item(2), the return value would be an empty string. I’d get the same result if I used getPropertyValue() to access a property that isn’t set in that element’s inline styles.
Using removeProperty()
In addition to the three methods mentioned above, there are two others exposed on a CSSStyleDeclaration object. In the following code and demo, I’m using the removeProperty() method:
box.style.setProperty('font-size', '1.5em');
box.style.item(0) // "font-size"
document.body.style.removeProperty('font-size');
document.body.style.item(0); // ""
In this case, after I set font-size using setProperty(), I log the property name to ensure it’s there. The demo then includes a button that, when clicked, will remove the property using removeProperty().
In the case of setProperty() and removeProperty(), the property name that you pass in is hyphenated (the same format as in your stylesheet), rather than camel-cased. This might seem confusing at first, but the value passed in is a string in this example, so it makes sense.
Getting and Setting a Property’s Priority
Finally, here’s an interesting feature that I discovered while researching this article: The getPropertyPriority() method, demonstrated with the code and CodePen below:
box.style.setProperty('font-family', 'Georgia, serif', 'important');
box.style.setProperty('font-size', '1.5em');
box.style.getPropertyPriority('font-family'); // important
op2.innerHTML = box.style.getPropertyPriority('font-size'); // ""
In the first line of that code, you can see I’m using the setProperty() method, as I did before. However, notice I’ve included a third argument. The third argument is an optional string that defines whether you want the property to have the !important keyword attached to it.
After I set the property with !important, I use the getPropertyPriority() method to check that property’s priority. If you want the property to not have importance, you can omit the third argument, use the keyword undefined, or include the third argument as an empty string.
And I should emphasize here that these methods would work in conjunction with any inline styles already placed directly in the HTML on an element’s style attribute.
So if I had the following HTML:
<div class="box" style="border: solid 1px red !important;">
I could use any of the methods discussed in this section to read or otherwise manipulate that style. And it should be noted here that since I used a shorthand property for this inline style and set it to !important, all of the longhand properties that make up that shorthand will return a priority of important when using getPropertyPriority(). See the code and demo below:
// These all return "important"
box.style.getPropertyPriority('border');
box.style.getPropertyPriority('border-top-width');
box.style.getPropertyPriority('border-bottom-width');
box.style.getPropertyPriority('border-color');
box.style.getPropertyPriority('border-style');
In the demo, even though I explicitly set only the border property in the style attribute, all the associated longhand properties that make up border will also return a value of important.
The CSSStyleSheet Interface
So far, much of what I’ve considered deals with inline styles (which often aren’t that useful) and computed styles (which are useful, but are often too specific).
A much more useful API that allows you to retrieve a stylesheet that has readable and writable values, and not just for inline styles, is the CSSStyleSheet API. The simplest way to access information from a document’s stylesheets is using the styleSheets property of the current document. This exposes the CSSStyleSheet interface.
For example, the line below uses the length property to see how many stylesheets the current document has:
document.styleSheets.length; // 1
I can reference any of the document’s stylesheets using zero-based indexing:
document.styleSheets[0];
If I log that stylesheet to my console, I can view the methods and properties available:
The one that will prove useful is the cssRules property. This property provides a list of all CSS rules (including declaration blocks, at-rules, media rules, etc.) contained in that stylesheet. In the following sections, I’ll detail how to utilize this API to manipulate and read styles from an external stylesheet.
Working with a Stylesheet Object
For the purpose of simplicity, let’s work with a sample stylesheet that has only a handful of rules in it. This will allow me to demonstrate how to use the CSSOM to access the different parts of a stylesheet in a similar way to accessing elements via DOM scripting.
Here is the stylesheet I’ll be working with:
* {
box-sizing: border-box;
}
body {
font-family: Helvetica, Arial, sans-serif;
font-size: 2em;
line-height: 1.4;
}
main {
width: 1024px;
margin: 0 auto !important;
}
.component {
float: right;
border-left: solid 1px #444;
margin-left: 20px;
}
@media (max-width: 800px) {
body {
line-height: 1.2;
}
.component {
float: none;
margin: 0;
}
}
a:hover {
color: lightgreen;
}
@keyframes exampleAnimation {
from {
color: blue;
}
20% {
color: orange;
}
to {
color: green;
}
}
code {
color: firebrick;
}
There’s a number of different things I can attempt with this example stylesheet and I’ll demonstrate a few of those here. First, I’m going to loop through all the style rules in the stylesheet and log the selector text for each one:
let myRules = document.styleSheets[0].cssRules,
p = document.querySelector('p');
for (i of myRules) {
if (i.type === 1) {
p.innerHTML += `<code>${i.selectorText}</code><br>`;
}
}
A couple of things to take note of in the above code and demo. First, I cache a reference to the cssRules object for my stylesheet. Then I loop over all the rules in that object, checking to see what type each one is.
In this case, I want rules that are type 1, which represents the STYLE_RULE constant. Other constants include IMPORT_RULE (3), MEDIA_RULE (4), KEYFRAMES_RULE (7), etc. You can view a full table of these constants in this MDN article.
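The numeric constants are easy to mix up, so a small lookup table (values as defined on the CSSRule interface) can help while debugging:

```javascript
// Names for the CSSRule.type constants used throughout this article.
const RULE_TYPES = {
  1: 'STYLE_RULE',
  3: 'IMPORT_RULE',
  4: 'MEDIA_RULE',
  5: 'FONT_FACE_RULE',
  6: 'PAGE_RULE',
  7: 'KEYFRAMES_RULE',
  8: 'KEYFRAME_RULE',
};

// Accepts anything with a numeric `type` property, e.g. a CSSRule.
function ruleTypeName(rule) {
  return RULE_TYPES[rule.type] || 'UNKNOWN';
}

console.log(ruleTypeName({ type: 7 })); // "KEYFRAMES_RULE"
```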
When I confirm that a rule is a style rule, I print the selectorText property for each of those style rules. This will produce the following lines for the specified stylesheet:
*
body
main
.component
a:hover
code
The selectorText property is a string representation of the selector used on that rule. This is a writable property, so if I want I can change the selector for a specific rule inside my original for loop with the following code:
if (i.selectorText === 'a:hover') {
i.selectorText = 'a:hover, a:active';
}
In this example, I’m looking for a selector that defines :hover styles on my links and expanding the selector to apply the same styles to elements in the :active state. Alternatively, I could use some kind of string method or even a regular expression to look for all instances of :hover, and then do something from there. But this should be enough to demonstrate how it works.
Accessing @media Rules with the CSSOM
You’ll notice my stylesheet also includes a media query rule and a keyframes at-rule block. Both of those were skipped when I searched for style rules (type 1). Let’s now find all @media rules:
let myRules = document.styleSheets[0].cssRules,
p = document.querySelector('.output');
for (i of myRules) {
if (i.type === 4) {
for (j of i.cssRules) {
p.innerHTML += `<code>${j.selectorText}</code><br>`;
}
}
}
Based on the given stylesheet, the above will produce:
body
.component
As you can see, after I loop through all the rules to see if any @media rules exist (type 4), I then loop through the cssRules object for each media rule (in this case, there’s only one) and log the selector text for each rule inside that media rule.
So the interface that’s exposed on a @media rule is similar to the interface exposed on a stylesheet. The @media rule, however, also includes a conditionText property, as shown in the following snippet and demo:
let myRules = document.styleSheets[0].cssRules,
p = document.querySelector('.output');
for (i of myRules) {
if (i.type === 4) {
p.innerHTML += `<code>${i.conditionText}</code><br>`;
// (max-width: 800px)
}
}
This code loops through all media query rules and logs the text that determines when that rule is applicable (i.e. the condition). There’s also a mediaText property that returns the same value. According to the spec, you can get or set either of these.
Accessing @keyframes Rules with the CSSOM
Now that I’ve demonstrated how to read information from a @media rule, let’s consider how to access a @keyframes rule. Here’s some code to get started:
let myRules = document.styleSheets[0].cssRules,
p = document.querySelector('.output');
for (i of myRules) {
if (i.type === 7) {
for (j of i.cssRules) {
p.innerHTML += `<code>${j.keyText}</code><br>`;
}
}
}
In this example, I’m looking for rules that have a type of 7 (i.e. @keyframes rules). When one is found, I loop through all of that rule’s cssRules and log the keyText property for each. The log in this case will be:
"0%" "20%" "100%"
You’ll notice my original CSS uses from and to as the first and last keyframes, but the keyText property computes these to 0% and 100%. The value of keyText can also be set. In my example stylesheet, I could hard code it like this:
// Read the current value (0%)
document.styleSheets[0].cssRules[6].cssRules[0].keyText;
// Change the value to 10%
document.styleSheets[0].cssRules[6].cssRules[0].keyText = '10%'
// Read the new value (10%)
document.styleSheets[0].cssRules[6].cssRules[0].keyText;
Using this, we can dynamically alter an animation’s keyframes in the flow of a web app or possibly in response to a user action.
Another property available when accessing a @keyframes rule is name:
let myRules = document.styleSheets[0].cssRules,
p = document.querySelector('.output');
for (i of myRules) {
if (i.type === 7) {
p.innerHTML += `<code>${i.name}</code><br>`;
}
}
Recall that in the CSS, the @keyframes rule looks like this:
@keyframes exampleAnimation {
from {
color: blue;
}
20% {
color: orange;
}
to {
color: green;
}
}
Thus, the name property allows me to read the custom name chosen for that @keyframes rule. This is the same name that would be used in the animation-name property when enabling the animation on a specific element.
One final thing I’ll mention here is the ability to grab specific styles that are inside a single keyframe. Here’s some example code with a demo:
let myRules = document.styleSheets[0].cssRules,
p = document.querySelector('.output');
for (i of myRules) {
if (i.type === 7) {
for (j of i.cssRules) {
p.innerHTML += `<code>${j.style.color}</code><br>`;
}
}
}
In this example, after I find the @keyframes rule, I loop through each of the rules in the keyframe (e.g. the “from” rule, the “20%” rule, etc). Then, within each of those rules, I access an individual style property. In this case, since I know color is the only property defined for each, I’m merely logging out the color values.
The main takeaway in this instance is the use of the style property, or object. Earlier I showed how this property can be used to access inline styles. But in this case, I’m using it to access the individual properties inside of a single keyframe.
You can probably see how this opens up some possibilities. This allows you to modify an individual keyframe’s properties on the fly, which could happen as a result of some user action or something else taking place in an app or possibly a web-based game.
Adding and Removing CSS Rules
The CSSStyleSheet interface has access to two methods that allow you to add or remove an entire rule from a stylesheet. The methods are: insertRule() and deleteRule(). Let’s see both of them in action manipulating our example stylesheet:
let myStylesheet = document.styleSheets[0];
console.log(myStylesheet.cssRules.length); // 8
document.styleSheets[0].insertRule('article { line-height: 1.5; font-size: 1.5em; }', myStylesheet.cssRules.length);
console.log(document.styleSheets[0].cssRules.length); // 9
In this case, I’m logging the length of the cssRules property (showing that the stylesheet originally has 8 rules in it), then I add the following CSS as an individual rule using the insertRule() method:
article {
line-height: 1.5;
font-size: 1.5em;
}
I log the length of the cssRules property again to confirm that the rule was added.
The insertRule() method takes a string as the first parameter (which is mandatory), comprising the full style rule that you want to insert (including selector, curly braces, etc). If you’re inserting an at-rule, then the full at-rule, including the individual rules nested inside the at-rule can be included in this string.
The second argument is optional. This is an integer that represents the position, or index, where you want the rule inserted. If this isn’t included, it defaults to 0 (meaning the rule will be inserted at the beginning of the rules collection). If the index happens to be larger than the length of the rules object, it will throw an error.
The deleteRule() method is much simpler to use:
let myStylesheet = document.styleSheets[0];
console.log(myStylesheet.cssRules.length); // 8
myStylesheet.deleteRule(3);
console.log(myStylesheet.cssRules.length); // 7
In this case, the method accepts a single argument that represents the index of the rule I want to remove.
With either method, because of zero-based indexing, the index passed in must stay within bounds or the browser throws an error: for deleteRule() it must be less than the length of the cssRules object, while for insertRule() it may be at most equal to that length (an index equal to the length appends the rule at the end, as in the earlier example).
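Put differently, the two methods accept slightly different ranges. A hypothetical pair of guards (not a DOM API, just an illustration of the bounds) captures the rule:

```javascript
// insertRule() accepts 0..length inclusive: index === length appends.
function canInsertAt(index, rulesLength) {
  return Number.isInteger(index) && index >= 0 && index <= rulesLength;
}

// deleteRule() accepts 0..length-1: a rule must exist at the index.
function canDeleteAt(index, rulesLength) {
  return Number.isInteger(index) && index >= 0 && index < rulesLength;
}

console.log(canInsertAt(8, 8)); // true  (append at the end)
console.log(canDeleteAt(8, 8)); // false (no rule lives at index 8)
```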
Revisiting the CSSStyleDeclaration API
Earlier I explained how to access individual properties and values declared as inline styles. This was done via element.style, exposing the CSSStyleDeclaration interface.
The CSSStyleDeclaration API, however, can also be exposed on an individual style rule as a subset of the CSSStyleSheet API. I already alluded to this when I showed you how to access properties inside a @keyframes rule. To understand how this works, compare the following two code snippets:
<div style="color: lightblue; width: 100px; font-size: 1.3em !important;"></div>
.box {
color: lightblue;
width: 100px;
font-size: 1.3em !important;
}
The first example is a set of inline styles that can be accessed as follows:
document.querySelector('div').style
This exposes the CSSStyleDeclaration API, which is what allows me to do stuff like element.style.color, element.style.width, etc.
But I can expose the exact same API on an individual style rule in an external stylesheet. This means I’m combining my use of the style property with the CSSStyleSheet interface.
So the CSS in the second example above, which uses the exact same styles as the inline version, can be accessed like this:
document.styleSheets[0].cssRules[0].style
This opens up a single CSSStyleDeclaration object on the one style rule in the stylesheet. If there were multiple style rules, each could be accessed using cssRules[1], cssRules[2], cssRules[3], and so on.
So within an external stylesheet, inside of a single style rule that is of type 1, I have access to all the methods and properties mentioned earlier. This includes setProperty(), getPropertyValue(), item(), removeProperty(), and getPropertyPriority(). In addition to this, those same features are available on an individual style rule inside of a @keyframes or @media rule.
Here’s a code snippet and demo that demonstrates how these methods would be used on an individual style rule in our sample stylesheet:
// Grab the style rules for the body and main elements
let myBodyRule = document.styleSheets[0].cssRules[1].style,
myMainRule = document.styleSheets[0].cssRules[2].style;
// Set the bg color on the body
myBodyRule.setProperty('background-color', 'peachpuff');
// Get the font size of the body
myBodyRule.getPropertyValue('font-size');
// Get the 5th item in the body's style rule
myBodyRule.item(5);
// Log the current length of the body style rule (8)
myBodyRule.length;
// Remove the line height
myBodyRule.removeProperty('line-height');
// log the length again (7)
myBodyRule.length;
// Check priority of font-family (empty string)
myBodyRule.getPropertyPriority('font-family');
// Check priority of margin in the "main" style rule (!important)
myMainRule.getPropertyPriority('margin');
The CSS Typed Object Model… The Future?
After everything I’ve considered in this article, it would seem odd that I’d have to break the news that it’s possible that one day the CSSOM as we know it will be mostly obsolete.
That’s because of something called the CSS Typed OM which is part of the Houdini Project. Although some people have noted that the new Typed OM is more verbose compared to the current CSSOM, the benefits, as outlined in this article by Eric Bidelman, include:
Fewer bugs
Arithmetic operations and unit conversion
Better performance
Error handling
CSS property names are always strings
For full details on those features and a glimpse into the syntax, be sure to check out the full article.
As of this writing, CSS Typed OM is supported only in Chrome. You can see the progress of browser support in this document.
Final Words
Manipulating stylesheets via JavaScript certainly isn’t something you’re going to do in every project. And some of the complex interactions made possible with the methods and properties I’ve introduced here have some very specific use cases.
If you’ve built some kind of tool that uses any of these APIs I’d love to hear about it. My research has only scratched the surface of what’s possible, but I’d love to see how any of this can be used in real-world examples.
I’ve put all the demos from this article into a CodePen collection, so feel free to mess around with those as you like.
For web applications, the safest approach right now is still to stick with Python 2.x, even for new projects. The simple reason is that Python 3 does not yet support enough libraries, and porting existing libraries to Python 3 is an enormous amount of work. While everyone complains about how hard and painful upgrading to Python 3 is, how can we make it a little easier?
For a top-level application, upgrading to Python 3 is not hard as long as its dependencies behave consistently after being ported. Upgrading to Python 3 should never have to be painful. This article therefore tries to list some dos and don'ts for writing new code.
If you are starting a new project, begin with Python 2.6 or 2.7; they offer many conveniences for upgrading to Python 3. If you do not plan to support older Python versions, you can already use many of Python 3's new features: you just have to enable them in your code.
Some __future__ features you should use:
division: I must admit I really dislike future division in Python 2. When reviewing code I constantly have to jump to the top of the file to check which division semantics are in effect. However, this is the default in Python 3, so you need to use it.
absolute_import: the most important one. When you are inside a package foo, from xml import bar no longer imports a foo.xml module; you have to write from .xml import bar instead. Much clearer, and it helps a lot.
As for importing print as a function, I do not recommend it, for the sake of clarity. Every editor highlights print as a keyword, which causes confusion. If something behaves inconsistently across files, it is best avoided. Fortunately the 2to3 tool converts it easily, so there is no need to import it from __future__ at all.
It is also better not to import unicode_literals from __future__, tempting as it looks. The reason is simple: many APIs expect different string types in different places, and unicode_literals backfires there. Admittedly, the import is useful in some situations, but that depends mostly on the underlying interfaces (libraries), and since it is a Python 2.6 feature, many libraries do support it. You can write b'foo' without importing unicode_literals; both spellings are available and are helpful to the 2to3 tool.
File input and output changed a lot in Python 3. You finally no longer have to struggle with unicode encodings when designing file APIs for a new project.
When working with text data, use codecs.open to open files. Default to utf-8 unless another encoding is explicitly specified, and operate only on unicode strings. If you decide to do binary input and output, remember to open files with the 'rb' flag instead of 'r'. This is necessary for proper Windows support.
When working with byte data, mark strings as bytes with b'foo' so that 2to3 will not convert them to unicode. Note the following behavior in Python 2.6:
>>> b'foo'
'foo'
>>> b'foo'[0]
'f'
>>> b'foo' + u'bar'
u'foobar'
>>> list(b'foo')
['f', 'o', 'o']
compared with how Python 3 treats byte strings:
>>> b'foo'[0]
102
>>> b'foo' + 'bar'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't concat bytes to str
>>> list(b'foo')
[102, 111, 111]
To get the same behavior as Python 2.6, you can do this:
>>> b'foo'[0:0 + 1]
b'f'
>>> b'foo' + 'bar'.encode('latin1')
b'foobar'
>>> to_charlist = lambda x: [x[c:c + 1] for c in range(len(x))]
>>> to_charlist(b'foo')
[b'f', b'o', b'o']
This code works correctly on both 2.6 and 3.x.
There are many things 2to3 cannot get right. Some of these are outright bugs in 2to3; others are cases where 2to3 cannot guess what your code is trying to do.
In Python 2 many people write code like this:
class Foo(object):
    def __str__(self):
        return unicode(self).encode('utf-8')
    def __unicode__(self):
        return u'Hello World'
2to3 assumes your API is not unicode-aware and converts it to this:
class Foo(object):
    def __str__(self):
        return str(self).encode('utf-8')
    def __unicode__(self):
        return 'Hello World'
That is wrong. First, __unicode__ is not used in Python 3; second, when you call str() on a Foo instance, __str__ calls itself and triggers a RuntimeError through infinite recursion. This can be solved with a custom 2to3 fixer, or by writing a small helper class that checks whether we are on Python 3:
import sys

class UnicodeMixin(object):
    if sys.version_info > (3, 0):
        __str__ = lambda x: x.__unicode__()
    else:
        __str__ = lambda x: unicode(x).encode('utf-8')

class Foo(UnicodeMixin):
    def __unicode__(self):
        return u'Hello World'
This way your objects still have a __unicode__ attribute in Python 3, but it does no harm. When you want to drop Python 2 support, you just walk through all subclasses of UnicodeMixin, rename __unicode__ to __str__, and delete the helper class.
The next problem is a bit trickier. In Python 2 the following is true:
>>> 'foo' == u'foo'
True
but not in Python 3:
>>> b'foo' == 'foo'
False
Worse, Python 2 does not emit a warning for the comparison (even with Python 3 warnings enabled), and neither does Python 3. So how do you find the problem? I wrote a small helper module called unicode-nazi. Just import it, and a warning is raised automatically whenever you mix unicode and byte strings:
>>> import unicodenazi
>>> u'foo' == 'foo'
__main__:1: UnicodeWarning: Implicit conversion of str to unicode
True
The following table lists some kinds of byte strings and what they become in Python 3:
Kind of string: type in Python 3 (where unicode == str)
Identifiers: unicode
Docstrings: unicode
__repr__: unicode
String dictionary keys: unicode
WSGI environ keys: unicode
HTTP header values, WSGI environ values: unicode; restricted to ASCII in 3.1 and to latin1 in 3.2
URLs: unicode, though some APIs also accept bytes. Note in particular that URLs must be utf-8 encoded to work with all standard library functions
Filenames: unicode or bytes; most APIs accept both but do not convert implicitly
Binary contents: bytes or bytearray. Note that the second type is mutable, so be aware that your string object may be mutable
Python source code: unicode; you must decode it yourself before passing it to exec
In some places (such as WSGI) unicode strings must be a subset of latin1. That is because HTTP does not specify an encoding, and latin1 is assumed to be safe. If you control both ends of the communication (cookies, for example) you can of course use utf-8. Which raises the question: how does this work when headers may only be latin1-encoded? In Python 3, and only there, you need a small trick:
return cookie_value.encode('utf-8').decode('latin1')
You are merely pseudo-encoding the unicode string via utf-8. The WSGI layer re-encodes it as latin1 and transmits this mislabeled utf-8 string, and on the receiving end you apply the reverse transformation.
It is ugly, but that is how utf-8 works in headers, and only the cookie header is affected; cookie headers are not very reliable anyway.
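The round trip can be sketched in Python 3 (the variable names are mine, purely illustrative):

```python
# Sender side: smuggle a utf-8 value through a latin1-only header channel.
value = "grüße"  # an illustrative cookie value containing non-ASCII text
wire = value.encode("utf-8").decode("latin1")  # pseudo-encode for the header

# Receiver side: reverse the transformation to recover the original text.
restored = wire.encode("latin1").decode("utf-8")
assert restored == value
```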
The only remaining problem in WSGI is the PATH_INFO / SCRIPT_NAME tuple, which your framework should take care of when running on Python 3.
bert-base-en-fr-de-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
How to use
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-de-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-de-cased")
To generate other smaller versions of multilingual transformers please visit our Github repo.
How to cite
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
Contact
Please contact amine@geotrend.fr for any question, feedback or request.
Solving garbled characters (mojibake) in Google Maps
Solved · 1 answer · 1,456 views · score 13
What I want to achieve
I want to take the clipboard contents in Python and open Google Maps with them.
While implementing Japanese-language input, the following problem occurred.
The problem / error message
When I enter numbers or English the map opens as written, but when I enter a Japanese string the address displayed is garbled.
What is displayed when I enter "東京": åFñ{
Relevant source code
import webbrowser,sys,pyperclip
a=u'東京'
pyperclip.copy(str(a))
if len(sys.argv)>1:
address = ''.join(sys.argv[1:])
else:
address = pyperclip.paste()
webbrowser.open('https://www.google.com/maps/place/'+address)
What I tried
Entering the text as Shift-JIS, which produced an error message
Entering the text as Unicode
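One plausible fix (a sketch of mine, assuming Python 3, not taken from the thread's answer): percent-encode the address as UTF-8 before building the URL, for example with urllib.parse.quote:

```python
from urllib.parse import quote

address = '東京'
# quote() percent-encodes the UTF-8 bytes of the string, so the browser
# receives an unambiguous ASCII URL instead of raw multibyte characters.
url = 'https://www.google.com/maps/place/' + quote(address)
print(url)  # https://www.google.com/maps/place/%E6%9D%B1%E4%BA%AC
```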
The Smart Home Kit
The Smart Home Kit consists of twelve modules, connecting cables, an AAA battery holder, a USB data cable, a screwdriver, and a manual. The main module is called sensor:bit and allows the other modules to be connected. It provides GVS and IIC connectors, a buzzer, a headphone jack, and a port for plugging in the microcomputer.
The remaining eleven modules of the kit are (with the technical parameter, measuring range, or unit in parentheses):
OLED display (128×64 pixels)
crash sensor (0 or 1)
temperature sensor (°C)
RGB LED (0-255)
noise sensor (dB)
light intensity sensor (0-100)
soil moisture sensor (0-100)
relay (0 or 1)
DC motor (0 or 1)
180° servo motor (0-180)
submersible pump (3.3-4.5 V)
The modules themselves suggest that this is a kit of medium difficulty, yet one manageable for a curious beginner. Programming in the Microsoft MakeCode platform is fortunately made easier by the Smarthome extension, which contains the OLED, Smarthome, and Neopixel blocks.
As is my habit, I started by writing a test program to verify that all the modules in the Smart Home Kit work. I had to think about the order in which to connect the individual modules. Because of the power requirements I could not connect the DC motor, the relay, and the submersible pump at the same time. In the end I chose the following layout of connected micro:bit pins:
P1 – soil moisture sensor
P2 – temperature sensor
P3 – light intensity sensor
P4 – crash sensor
P8 – RGB LED
P10 – noise sensor
P13 – 180° servo motor
P19 and P20 – OLED display
Then I started programming and testing, which resulted in the final version of my program:
JavaScript program
let item = 0
input.onButtonPressed(Button.A, function () {
OLED.newLine()
OLED.writeString("VLHKOST PODY (0-100):")
OLED.writeNum(smarthome.ReadSoilHumidity(AnalogPin.P1))
OLED.newLine()
OLED.writeString("TEPLOTA (C): ")
OLED.writeNum(smarthome.ReadTemperature(TMP36Type.TMP36_temperature_C, AnalogPin.P2))
OLED.newLine()
OLED.writeString("INTENZITA SVETLA (0-100): ")
OLED.writeNum(smarthome.ReadLightIntensity(AnalogPin.P3))
basic.pause(10000)
OLED.clear()
OLED.writeString("HLUK (dB): ")
OLED.writeNum(smarthome.ReadNoise(AnalogPin.P10))
OLED.newLine()
OLED.writeString("FARBA DIODY: ")
OLED.writeString("modra")
strip.showColor(neopixel.colors(NeoPixelColors.Blue))
OLED.newLine()
OLED.writeString("SERVO (0-180): ")
pins.servoWritePin(AnalogPin.P13, 180)
OLED.writeNum(180)
OLED.newLine()
OLED.writeString("NARAZ (0alebo1): ")
OLED.writeNum(pins.digitalReadPin(DigitalPin.P4))
basic.pause(10000)
pins.servoWritePin(AnalogPin.P13, 0)
strip.clear()
strip.show()
OLED.clear()
})
let strip: neopixel.Strip = null
led.enable(false)
music.beginMelody(music.builtInMelody(Melodies.PowerUp), MelodyOptions.Once)
OLED.init(128, 64)
pins.setPull(DigitalPin.P4, PinPullMode.PullUp)
strip = neopixel.create(DigitalPin.P8, 1, NeoPixelMode.RGB)
MakeCode program
Video
After seeing all the kit's connected modules in full operation, I was most impressed by the soil moisture sensor, which can also be used to monitor a water level or skin moisture.
Conclusion
With the conclusion comes the time to thank you for reading my article about the Smart Home Kit. By combining its modules in various ways you can create many remarkable programs that can, to some extent, be applied in practice. I will present one of them to you next time.
E-shop
Elecfreaks Smart Home Kit (bez BBC micro:bit)
A project box for building a smart home!
BBC micro:bit
An endlessly programmable and extensible educational pocket computer.
Description
Given an array of integers and an integer k, you need to find the total number of continuous subarrays whose sum equals to k.
Example 1:
Input: nums = [1,1,1], k = 2
Output: 2
Note:
The length of the array is in range [1, 20,000].
The range of numbers in the array is [-1000, 1000] and the range of the integer k is [-1e7, 1e7].
Explanation
Keep a running prefix sum and a hash map counting how many times each prefix sum has occurred; whenever the current prefix sum minus k (the complement) has been seen before, each of those occurrences marks a subarray summing to k.
Python Solution
class Solution:
def subarraySum(self, nums: List[int], k: int) -> int:
count = 0
sum = 0
map = {}
map[0] = 1
for i in range(0, len(nums)):
sum += nums[i]
if (sum - k) in map:
count += map.get(sum - k)
map[sum] = map.get(sum, 0) + 1
return count
Time complexity: O(N).
Space complexity: O(N).
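As a quick sanity check, here is the same prefix-sum idea written as a standalone function (the name is mine) and run on the example input:

```python
from typing import List

def subarray_sum(nums: List[int], k: int) -> int:
    # seen[s] counts how many prefixes so far sum to s; the empty prefix sums to 0.
    count, prefix = 0, 0
    seen = {0: 1}
    for x in nums:
        prefix += x
        # Every earlier prefix equal to prefix - k closes a subarray summing to k.
        count += seen.get(prefix - k, 0)
        seen[prefix] = seen.get(prefix, 0) + 1
    return count

print(subarray_sum([1, 1, 1], 2))  # 2
```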
NumPy basic operations (1)
Let's start with a script to get familiar with the computations and how they are written:
import numpy as np
a=np.array([10,20,30,40]) # array([10, 20, 30, 40])
b=np.arange(4) # array([0, 1, 2, 3])
Several basic NumPy operations
In the script above, a and b are two variables whose type is array, i.e. matrices; both are 1-row, 4-column matrices, and the elements of b run from 0 to 3. To subtract one matrix from the other, try entering:
c=a-b # array([10, 19, 28, 37])
Running the script above gives the element-wise differences, i.e. [10, 19, 28, 37]. Likewise, element-wise addition and multiplication of matrices are written the same way:
c=a+b # array([10, 21, 32, 43])
c=a*b # array([ 0, 20, 60, 120])
One difference is that in NumPy, raising each element of a matrix to a power relies on the double-star operator **; taking the square as an example:
c=b**2 # array([0, 1, 4, 9])
In addition, NumPy provides many mathematical functions, such as the trigonometric functions. When you need to apply a function to every element of a matrix, they are easy to call (using sin as an example):
c=10*np.sin(a)
# array([-5.44021111, 9.12945251, -9.88031624, 7.4511316 ])
Beyond applying functions, a small change to the print call in the script lets you perform logical tests:
print(b<3)
# array([ True, True, True, False], dtype=bool)
Since this is a logical test, it returns a matrix of type bool: True for the elements that satisfy the condition and False for those that do not. The program above prints [True True True False]. Note that to test for equality you still need to type ==, not =, to perform the logical comparison.
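As a short sketch (assuming NumPy is installed), equality tests are element-wise too, and True behaves as 1 in sums, which gives an easy way to count matching elements:

```python
import numpy as np

b = np.arange(4)        # array([0, 1, 2, 3])
print(b == 3)           # [False False False  True]
print((b < 3).sum())    # 3, because each True counts as 1
```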
All the operations above were on one-dimensional matrices, that is, matrices with a single row. To operate on matrices with multiple rows and dimensions, the opening script needs a few changes:
a=np.array([[1,1],[0,1]])
b=np.arange(4).reshape((2,2))
print(a)
# array([[1, 1],
# [0, 1]])
print(b)
# array([[0, 1],
# [2, 3]])
Now the matrices a and b are both 2-row, 2-column; the reshape operation rebuilds the matrix into the shape given by the numbers in parentheses. Somewhat differently from before, matrix multiplication in NumPy comes in two kinds: the element-wise multiplication seen above, and standard matrix multiplication, where each element is obtained by multiplying the corresponding row by the corresponding column:
c_dot = np.dot(a,b)
# array([[2, 4],
# [2, 3]])
There is also another way to write dot, namely:
c_dot_2 = a.dot(b)
# array([[2, 4],
# [2, 3]])
Next we'll define a new script to look at how sum(), min(), and max() are used:
import numpy as np
a=np.random.random((2,4))
print(a)
# array([[ 0.94692159, 0.20821798, 0.35339414, 0.2805278 ],
# [ 0.04836775, 0.04023552, 0.44091941, 0.21665268]])
Because the numbers are generated randomly, your results may differ. The operation on a in the second line creates a 2-row, 4-column matrix whose elements are random numbers drawn from 0 to 1. In this randomly generated matrix we can sum the elements and look for extrema, as follows:
np.sum(a) # 4.4043622002745959
np.min(a) # 0.23651223533671784
np.max(a) # 0.90438450240606416
These sum all the elements of the matrix, find the minimum, and find the maximum, respectively. You can check the corresponding values by printing them with print().
If you need to run these lookups over rows or columns, assign a value to axis in the code above. When axis is 0 the lookup runs column by column; when axis is 1 it runs row by row.
To make this clearer, let's continue the lookups with the example we just used:
print("a =",a)
# a = [[ 0.23651224 0.41900661 0.84869417 0.46456022]
# [ 0.60771087 0.9043845 0.36603285 0.55746074]]
print("sum =",np.sum(a,axis=1))
# sum = [ 1.96877324 2.43558896]
print("min =",np.min(a,axis=0))
# min = [ 0.23651224 0.41900661 0.36603285 0.46456022]
print("max =",np.max(a,axis=1))
# max = [ 0.84869417 0.9043845 ]
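Since the matrix above is random, here is the same axis rule demonstrated on a fixed matrix, where the results are easy to verify by hand:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

print(np.sum(a, axis=0))  # column sums   -> [5 7 9]
print(np.sum(a, axis=1))  # row sums      -> [ 6 15]
print(np.min(a, axis=0))  # column minima -> [1 2 3]
print(np.max(a, axis=1))  # row maxima    -> [3 6]
```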
Recently we have learned that a certain branch of the government may be overstepping the constitutional right to privacy. While this may be old news for many, the recent leaks by Edward Snowden have brought the issue to national attention and have caused quite a stir.
And rightly so: the argument can be made that the NSA is violating the Fourth Amendment. Restore the Fourth, a grassroots organization that sprung up nearly overnight, is making that argument and taking it to the public.
From the website:
Restore the Fourth is a grassroots, non-partisan movement; we believe the government of the United States must respect the right to privacy of all its citizens as the Fourth Amendment clearly states. We seek to bring awareness to the abuses against our civil liberties and the erosion of this cornerstone of our democracy.
I’m sympathetic to this cause. And despite most likely being placed on a government watch list for the rest of my life, I decided to take a crack at building these folks a website. The decision was made to use Django, a neat framework written in Python.
One of the most interesting features of the website is the map on the front page. The locations are set by pulling objects from the database that represent protests. These objects are editable in the back end by regular admins using Django’s awesome gui admin interface so developers do not need to be involved.
When creating a protest via the admin interface you don’t need to supply a latitude/longitude pair (which Google Maps needs); instead the backend uses Google’s geocoding API, so all that’s needed is a city and/or state. To make that process easy, we’re using geopy, a geolocation library for Python. In the Protest model, we can simply call this:
# requires: from geopy import geocoders
def generateLatLong(self):
    g = geocoders.GoogleV3()
    place, (lat, lng) = g.geocode("{0} {1}".format(self.state, self.city), exactly_one=True)
    self.latitude = lat
    self.longitude = lng
Then in our views, we make a method to serialize these objects as json:
# requires: from django.core import serializers
#           from django.http import HttpResponse
def protestsjson(request):
    protest_list = Protest.objects.all()
    data = serializers.serialize('json', protest_list)
    return HttpResponse(data, mimetype="application/json")
From there, it’s just a matter of telling the Google map to use those objects to create markers:
var map = new EventsMap();
$.getJSON('/protests.json', function(data){
    map.plotStaticMarkers(data);
});

// plot new markers on the map, make them interactive
this.plotStaticMarkers = function(data){
    $.each(data, function (i, location) {
        var latlng = new google.maps.LatLng(location.fields.latitude, location.fields.longitude);
        var marker = new google.maps.Marker({
            map: map,
            position: latlng,
            title: location.fields.city
        });
        google.maps.event.addListener(marker, 'click', function () {
            var content = infoWindowTemplate
                .replace('{location}', location.fields.city)
                .replace('{info}', location.pk);
            infoWindow.setContent(content);
            infoWindow.open(map, marker);
        });
    })
}
And there we have it, a neat interactive map that doesn’t require a developer’s involvement to edit.
Check it out in action: restorethefourth.net
As usual, the source is available on GitHub.
The phrase "Saving a TensorFlow model" typically means one of two things:
Checkpoints, OR
SavedModel.
Checkpoints capture the exact value of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model and thus are typically only useful when source code that will use the saved parameter values is available.
The SavedModel format on the other hand includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created the model. They are thus suitable for deployment via TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C# etc. TensorFlow APIs).
This guide covers APIs for writing and reading checkpoints.
Setup
import tensorflow as tf
class Net(tf.keras.Model):
"""A simple linear model."""
def __init__(self):
super(Net, self).__init__()
self.l1 = tf.keras.layers.Dense(5)
def call(self, x):
return self.l1(x)
net = Net()
Saving from tf.keras training APIs
See the tf.keras guide on saving and restoring.
tf.keras.Model.save_weights saves a TensorFlow checkpoint.
net.save_weights('easy_checkpoint')
Writing checkpoints
The easiest way to manage variables is by attaching them to Python objects, then referencing those objects.
Subclasses of tf.train.Checkpoint, tf.keras.layers.Layer, and tf.keras.Model automatically track variables assigned to their attributes. The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.
You can easily save a model-checkpoint with Model.save_weights.
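As a minimal sketch of that round trip (reusing the Net class from this guide; the 'demo_checkpoint' prefix is an arbitrary name chosen for this example):

```python
import tensorflow as tf

class Net(tf.keras.Model):
    """A simple linear model (same as the Net class defined earlier)."""
    def __init__(self):
        super(Net, self).__init__()
        self.l1 = tf.keras.layers.Dense(5)

    def call(self, x):
        return self.l1(x)

net = Net()
net(tf.ones([1, 3]))  # call once so the Dense layer creates its variables

# Write a TensorFlow-format checkpoint under the prefix 'demo_checkpoint'.
net.save_weights('demo_checkpoint')

# A fresh model starts with different random weights...
restored = Net()
restored(tf.ones([1, 3]))

# ...until the checkpointed values are loaded back in.
restored.load_weights('demo_checkpoint')
```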
Manual checkpointing
Setup
To help demonstrate all the features of tf.train.Checkpoint, define a toy dataset and optimization step:
def toy_dataset():
inputs = tf.range(10.)[:, None]
labels = inputs * 5. + tf.range(5.)[None, :]
return tf.data.Dataset.from_tensor_slices(
dict(x=inputs, y=labels)).repeat().batch(2)
def train_step(net, example, optimizer):
"""Trains `net` on `example` using `optimizer`."""
with tf.GradientTape() as tape:
output = net(example['x'])
loss = tf.reduce_mean(tf.abs(output - example['y']))
variables = net.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss
Create the checkpoint objects
Use a tf.train.Checkpoint object to manually create a checkpoint, where the objects you want to checkpoint are set as attributes on the object.
A tf.train.CheckpointManager can also be helpful for managing multiple checkpoints.
opt = tf.keras.optimizers.Adam(0.1)
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
Train and checkpoint the model
The following training loop creates an instance of the model and of an optimizer, then gathers them into a tf.train.Checkpoint object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk.
def train_and_checkpoint(net, manager):
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
print("Restored from {}".format(manager.latest_checkpoint))
else:
print("Initializing from scratch.")
for _ in range(50):
example = next(iterator)
loss = train_step(net, example, opt)
ckpt.step.assign_add(1)
if int(ckpt.step) % 10 == 0:
save_path = manager.save()
print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
print("loss {:1.2f}".format(loss.numpy()))
train_and_checkpoint(net, manager)
Initializing from scratch. Saved checkpoint for step 10: ./tf_ckpts/ckpt-1 loss 30.42 Saved checkpoint for step 20: ./tf_ckpts/ckpt-2 loss 23.83 Saved checkpoint for step 30: ./tf_ckpts/ckpt-3 loss 17.27 Saved checkpoint for step 40: ./tf_ckpts/ckpt-4 loss 10.81 Saved checkpoint for step 50: ./tf_ckpts/ckpt-5 loss 4.74
Restore and continue training
After the first training cycle you can pass a new model and manager, but pick up training exactly where you left off:
opt = tf.keras.optimizers.Adam(0.1)
net = Net()
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
train_and_checkpoint(net, manager)
Restored from ./tf_ckpts/ckpt-5 Saved checkpoint for step 60: ./tf_ckpts/ckpt-6 loss 0.85 Saved checkpoint for step 70: ./tf_ckpts/ckpt-7 loss 0.87 Saved checkpoint for step 80: ./tf_ckpts/ckpt-8 loss 0.71 Saved checkpoint for step 90: ./tf_ckpts/ckpt-9 loss 0.46 Saved checkpoint for step 100: ./tf_ckpts/ckpt-10 loss 0.21
The tf.train.CheckpointManager object deletes old checkpoints. Above it's configured to keep only the three most recent checkpoints.
print(manager.checkpoints) # List the three remaining checkpoints
['./tf_ckpts/ckpt-8', './tf_ckpts/ckpt-9', './tf_ckpts/ckpt-10']
These paths, e.g. './tf_ckpts/ckpt-10', are not files on disk. Instead they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file ('./tf_ckpts/checkpoint') where the CheckpointManager saves its state.
ls ./tf_ckpts
checkpoint ckpt-8.data-00000-of-00001 ckpt-9.index ckpt-10.data-00000-of-00001 ckpt-8.index ckpt-10.index ckpt-9.data-00000-of-00001
Loading mechanics
TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the "l1" in self.l1 = tf.keras.layers.Dense(5). tf.train.Checkpoint uses its keyword argument names, as in the "step" in tf.train.Checkpoint(step=...).
The dependency graph from the example above looks like this:
The optimizer is in red, regular variables are in blue, and the optimizer slot variables are in orange. The other nodes—for example, representing the tf.train.Checkpoint—are in black.
Slot variables are part of the optimizer's state, but are created for a specific variable. For example the 'm' edges above correspond to momentum, which the Adam optimizer tracks for each variable. Slot variables are only saved in a checkpoint if the variable and the optimizer would both be saved, thus the dashed edges.
Calling restore on a tf.train.Checkpoint object queues the requested restorations, restoring variable values as soon as there's a matching path from the Checkpoint object. For example, you can load just the bias from the model you defined above by reconstructing one path to it through the network and the layer.
to_restore = tf.Variable(tf.zeros([5]))
print(to_restore.numpy()) # All zeros
fake_layer = tf.train.Checkpoint(bias=to_restore)
fake_net = tf.train.Checkpoint(l1=fake_layer)
new_root = tf.train.Checkpoint(net=fake_net)
status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))
print(to_restore.numpy()) # This gets the restored value.
[0. 0. 0. 0. 0.] [2.831489 3.7156947 2.5892444 3.8669944 4.749503 ]
The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint you wrote above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints.
restore returns a status object, which has optional assertions. All of the objects created in the new Checkpoint have been restored, so status.assert_existing_objects_matched passes.
status.assert_existing_objects_matched()
<tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x7f1644447b70>
There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. status.assert_consumed only passes if the checkpoint and the program match exactly, and would throw an exception here.
Delayed restorations
Layer objects in TensorFlow may delay the creation of variables to their first call, when input shapes are available. For example the shape of a Dense layer's kernel depends on both the layer's input and output shapes, and so the output shape required as a constructor argument is not enough information to create the variable on its own. Since calling a Layer also reads the variable's value, a restore must happen between the variable's creation and its first use.
To support this idiom, tf.train.Checkpoint queues restores which don't yet have a matching variable.
delayed_restore = tf.Variable(tf.zeros([1, 5]))
print(delayed_restore.numpy()) # Not restored; still zeros
fake_layer.kernel = delayed_restore
print(delayed_restore.numpy()) # Restored
[[0. 0. 0. 0. 0.]] [[4.5719748 4.6099544 4.931875 4.836442 4.8496275]]
Manually inspecting checkpoints
tf.train.load_checkpoint returns a CheckpointReader that gives lower level access to the checkpoint contents. It contains mappings from each variable's key to the shape and dtype for each variable in the checkpoint. A variable's key is its object path, like in the graphs displayed above.
reader = tf.train.load_checkpoint('./tf_ckpts/')
shape_from_key = reader.get_variable_to_shape_map()
dtype_from_key = reader.get_variable_to_dtype_map()
sorted(shape_from_key.keys())
['_CHECKPOINTABLE_OBJECT_GRAPH', 'iterator/.ATTRIBUTES/ITERATOR_STATE', 'net/l1/bias/.ATTRIBUTES/VARIABLE_VALUE', 'net/l1/bias/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE', 'net/l1/bias/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE', 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE', 'net/l1/kernel/.OPTIMIZER_SLOT/optimizer/m/.ATTRIBUTES/VARIABLE_VALUE', 'net/l1/kernel/.OPTIMIZER_SLOT/optimizer/v/.ATTRIBUTES/VARIABLE_VALUE', 'optimizer/beta_1/.ATTRIBUTES/VARIABLE_VALUE', 'optimizer/beta_2/.ATTRIBUTES/VARIABLE_VALUE', 'optimizer/decay/.ATTRIBUTES/VARIABLE_VALUE', 'optimizer/iter/.ATTRIBUTES/VARIABLE_VALUE', 'optimizer/learning_rate/.ATTRIBUTES/VARIABLE_VALUE', 'save_counter/.ATTRIBUTES/VARIABLE_VALUE', 'step/.ATTRIBUTES/VARIABLE_VALUE']
So if you're interested in the value of net.l1.kernel you can get the value with the following code:
key = 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE'
print("Shape:", shape_from_key[key])
print("Dtype:", dtype_from_key[key].name)
Shape: [1, 5] Dtype: float32
It also provides a get_tensor method allowing you to inspect the value of a variable:
reader.get_tensor(key)
array([[4.5719748, 4.6099544, 4.931875 , 4.836442 , 4.8496275]], dtype=float32)
List and dictionary tracking
As with direct attribute assignments like self.l1 = tf.keras.layers.Dense(5), assigning lists and dictionaries to attributes will track their contents.
save = tf.train.Checkpoint()
save.listed = [tf.Variable(1.)]
save.listed.append(tf.Variable(2.))
save.mapped = {'one': save.listed[0]}
save.mapped['two'] = save.listed[1]
save_path = save.save('./tf_list_example')
restore = tf.train.Checkpoint()
v2 = tf.Variable(0.)
assert 0. == v2.numpy() # Not restored yet
restore.mapped = {'two': v2}
restore.restore(save_path)
assert 2. == v2.numpy()
You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data-structures. Just like the attribute based loading, these wrappers restore a variable's value as soon as it's added to the container.
restore.listed = []
print(restore.listed) # ListWrapper([])
v1 = tf.Variable(0.)
restore.listed.append(v1) # Restores v1, from restore() in the previous cell
assert 1. == v1.numpy()
ListWrapper([])
The same tracking is automatically applied to subclasses of tf.keras.Model, and may be used for example to track lists of layers.
Summary
TensorFlow objects provide an easy automatic mechanism for saving and restoring the values of variables they use.
Saturday, 31 December 2011
You can start bidding for a numbered Raspberry Pi beta board tonight!
The two highest numbered boards go on auction on eBay at 10pm. More boards will be auctioned, with board number one the last to be offered. Details here.
Sunday, 25 December 2011
The Raspberry Pi team are taking a well-earned break over Christmas, but they've left a few more snippets to keep us happy. New photos, a facelift for the Forum, and an informative update to the About page.
Let's hope the testing of the beta boards continues to go well. I'm still hoping for the limited edition auction before the end of the year, and a chance for everyone to buy one in January!
Saturday, 24 December 2011
Raspberry Pi beta boards booting on the RasbPi website. Amazing video performance.
It looks as if there's an easy-to-fix issue with the current layout. Liz has promised us more info tomorrow.
The testing is going well, and with any luck we'll see the first few boards auctioned this year. Fantastic!
Congratulations, Pi cooks. Have an excellent Christmas.
Friday, 23 December 2011
Friday, 16 December 2011
The Gertboard is coming - a capable, inexpensive expansion board for the soon-to-ship Raspberry Pi.
The Gertboard will make it much simpler to interface the Raspberry Pi to the outside world. It provides access to the Pi's GPIO pins, and can include a motor driver.
Gert van Loo of Fen Logic Ltd, the board's designer, will publish all the design documents once the design is stable. Raspberry Pi expect to sell a bare board in their shop. That means that you'll need to buy the components yourself, program the on-board PIC if required, and solder the components.
Since many of the components are SMT (surface mount), you will need to be a proficient constructor to assemble your own board. Gert hopes that someone will start making pre-built boards. I suspect there will be quite a market once the Pi starts to ship in volume next year.
Thursday, 15 December 2011
I've just started using the Open Bench Logic Sniffer.
It's a very capable bit of open source hardware and it's supported by a highly functional Java client. Together they give you access to the features of a commercial logic analyser at a fraction of the price. In the UK the board costs just under £50 including VAT. The software is free.
There is one minor pitfall, though. The product's home page has a prominent link to the original SUMP client. That takes you to an old version which is no longer actively maintained - indeed I could not even fire it up.
There is an actively maintained alternative which worked straight out of the box. There are also a number of firmware upgrades. I don't yet know if I need to apply any of these; I'll report when I do.
The sniffer needs at least one accessory: a cable which connects the board headers to the circuit under test.
A single cable gives you 8 connections; if you want to use 16 channels you will need two of them.
You'll also need a USB cable with a micro connector, which is not provided.
The sniffer and cables are available from seedstudios in Hong Kong. In the US you can buy from Sparkfun or the Gadget Factory. In the UK you can find them at SK Pang and ProtoPic.
Wednesday, 7 December 2011
Farnell delivered my Beaglebone this morning. The bone is powered up and running; I'll describe the startup process in a later post.
The Beaglebone is compact; the board is about 2.1" by 3.4", although the Ethernet socket protrudes by another 0.1".
Unlike its big brother, the Beaglebone has no HDMI socket, so you cannot interface the bare board with a DVI-D monitor. The bone will soon be supported by a number of Capes (the Beaglebone equivalent of Arduino's shields), and one of these will provide DVI-D support. As you'd expect, you can connect to your board via the serial USB link. You can also use SSH or VNC to provide a console or GUI session over Ethernet.
However, the Beaglebone offers another exciting way to interact with the board.
Once you've connected the board to your local network, you can open the board's cloud9 browser-based JavaScript development environment on your PC or Laptop. cloud9 allows you to create and run JavaScript applications on the Beaglebone.
The environment comes with a handy demo (blinkled.js) which will flash one of the board LEDs on and off. The sample code below gives you a feeling for how easily you can create programs that will interact with the bone's hardware.
var bb = require('./bonescript');
var ledPin = bone.P8_3;
var ledPin2 = bone.USR3;
setup = function() {
pinMode(ledPin, OUTPUT);
pinMode(ledPin2, OUTPUT);
};
loop = function() {
digitalWrite(ledPin, HIGH);
digitalWrite(ledPin2, HIGH);
delay(1000);
digitalWrite(ledPin, LOW);
digitalWrite(ledPin2, LOW);
delay(1000);
};
bb.run();
The goal of the bonescript environment is to provide libraries that are simple to use (like the Arduino libraries) through a development environment that needs no installation. The libraries are a work in progress, but there's already enough to get the first wave of explorers off to a flying start.
I love the Beaglebone. The documentation is still a bit on the scanty side, but the hardware, and the development environment, are inspiring. I'm looking forward to getting to know this great product much better over the next few days.
One word of warning - it looks as if Farnell have already sold out of their first shipment, and I suspect the next batch will also sell out fast. You can place an order here.
Tuesday, 6 December 2011
Raspberry Pi have posted some higher-resolution pictures of bare (unpopulated) boards from their first batch. The images are fascinating, not least because they show the scarily compact BGA connections which will link the main Broadcom processor to the board. There are hundreds of pads in a footprint the size of a postage stamp.
The post makes clear that
these are not the boards as customers will see them; the board you get will be sold fully populated, with components soldered on. These images are there to whet our appetite and keep us informed about the march towards shipment.
Saturday, 3 December 2011
A few days ago I found an old friend - an Acorn Microcomputer dating back to the late 1970s. You might know it by its later name of Acorn System 1.
It's arguably the great-great-grandfather of Raspberry Pi ( though several orders of magnitude less powerful). Acorn fathered ARM, whose design lies at the heart of RasbPi's processor; the processor is made by Broadcom, who also trace their ancestry back to Acorn.
Mike Colishaw's website.
I've done nothing with this early micro for years, and I'd pretty much decided to give it to a museum. I suspect it still works, though I haven't tried to apply power yet.
Then I read about the latest batch of RasbPi boards and the plans to auction some, and wondered if the Raspberry Pi project would care to auction this forerunner to raise more money for the project.
If you would like to own this bit of computing history, head over to the Raspberry Pi forum and encourage the RasbPi team to get in touch and auction it!
Thursday, 1 December 2011
Raspberry Pi have just released pictures of the first batch of PCBs for Raspberry Pi.
This amazing £25 system on a chip should be shipping in low volume before the end of the year. It's built up a huge following already, and the initial production run of 10,000 is likely to sell out fast.
The pictures of the PCB layout were amazing, and the actual PCB is just as remarkable.
The computer has the potential to change IT education, in this country and elsewhere, but much of its value lies in the way that it's motivated the community.
The open source movement has shown that there is a huge pool of talented, motivated people who like to share their work. Raspberry Pi is a focus for those of us who would like to share with the next generation of technologists.
It's a product for optimists.
Arduino have released version 1.0 of the Arduino development environment.
There's a lot of new functionality in the new release, and there are some significant changes.
This blog post talks about an earlier release candidate, but presumably it still applies. At the time of posting, the release notes stop at release 23, but that may be fixed by the time you read this.
A quick test showed some of the new features, and some of the issues involved in migrating existing projects.
The new IDE looks subtly different. Below you'll see a 'verify' button (shown as a tick) which you can use to compile a sketch without trying to upload it. I found that handy when checking the compatibility of existing code.
You'll find changes in several libraries. In most cases these are improvements in functionality. Examples:
the Serial library is now asynchronous
Serial now contains functions for parsing input data
the String class is now more efficient and robust
the Ethernet library now supports DHCP and DNS
the SD library (which you can use for reading from/writing to SD cards) now supports multiple open files
However, some changes will break existing code. For instance
send() and receive() in the Wire library have been replaced by write() and read()
the effect of Serial.print() on a byte argument has changed. It's now more consistent with other libraries, but existing code will break. The old behaviour can be reproduced by using Serial.write() instead
Finally, the extension for Arduino sketches has changed from pde to ino. The new IDE will recognise your old sketches, though, and when you save them a pop-up will remind you of the change before asking if you want to save the file with the new extension. That's handy, as it gives you an easy way to keep track of which sketches you've migrated.
There are many other changes listed in the blog. I'll update this post if I come across anything crucial.
Wednesday, 30 November 2011
mbed lpc11u24 beta kit
Fire up VirtualBox
Start up a Windows XP VM
Plug a USB connector into the mbed
Tell VirtualBox to give the VM access to that USB device and
Curse at the fact that I'm restricted to the much slower version 1.1 of the USB protocol
I've now switched to a Java package which runs under Ubuntu. Details on the mbed site.
The next step should be to do some timings, but I won't be able to run at full speed until the chaps at mbed have finished porting one of the libraries I need to the new beta mbed LPC11U24.
Let's Make Robots - a site for robot builders to show off their work, exchange ideas and learn from others. It looks like a great resource with a friendly community.
If you're a robot-builder, take a few minutes to register and post details of your robotic creations.
Soft robots keep popping up (or slithering along) all over the place. I'm beginning to think I ought to take a closer look at the technology in my copious free time. I wonder how well a combined crawler/gripper would function underwater.
I've just realised that I placed a couple of posts on the BeagleBoard over on my Application development blog, when I meant to post them here.
I won't cross-post, but here are the links:
Busy with the BeagleBoard and Blogging from the Beagleboard. Together they give a bit of background to some of my later posts.
Tuesday, 29 November 2011
While working on the mbed logic analyser, I realised I had run out of USB ports on my workstation. A quick search on Amazon suggested this D-Link DUB-H7 7-port USB 2.0 Hub. I ordered one and it arrived today.
Nothing earthshaking, but it works well, looks nice and is reasonably priced. Recommended.
Monday, 28 November 2011
My Black Friday Pololu order arrived this morning. I'm delighted with the purchase, and one item in particular has blown me away.
It's a MiniMU-9, which combines a triple-axis gyro, a triple-axis accelerometer and a triple-axis magnetometer. In other words, it measures all the data you need to work out the position and orientation of a mobile robot. And it's tiny.
To give you a feeling of its size, the connector holes in the image are 0.1" apart, and it weighs just under a gram.
Now if only I had a small, low-cost lightweight, Linux-capable board to control it, I could build an autonomous blimp. But that's just Pi in the Sky :)
Seriously, the Raspberry Pi (described in the Register article linked to above) is well-suited to the job of managing an autonomous blimp. It's true that the gumstix is even smaller than the Pi, and available this month rather than next; but it costs a lot more.
Meanwhile I have projects stacked up that will keep me occupied for the foreseeable future. I have more work to do on my beta mbed project, and I need to post more Python I2C code for the Beagleboard. TrackBot is still waiting for a working radio link.
Christmas is going to be busy this year.
Friday, 25 November 2011
LPC11U24 mbed
I can already send the state of an 8-bit wide set of mbed inputs to the PC over a USB HID connection. I have no idea how fast the final version will be, but it should be able to analyse fast (400kHz) I2C and some SPI interactions in real time.
Hendrik Lipka (another mbed user) suggested that I might adapt the open-source SUMP client so that it can capture, analyse and display data on the PC. That's an exciting idea which I will certainly follow up.
Thursday, 24 November 2011
Qt cross-platform framework, you just might get a free Raspberry Pi Linux-based microcomputer, courtesy of Nokia.
(If you're unfamiliar with this credit-card sized wonder, this video will give you a great introduction).
Nokia are looking for people who will port software, develop apps, and test and improve the Qt 5 Linux stack. They are so keen that they have funded 400 vouchers for Raspberry Pi.
If you have a visible track record as a Qt developer, and a good idea for a relevant project, head off here to find out how to apply for a voucher. Last time I looked there were 65 applications for those 400 vouchers, so hurry along!
mbed nxp LPC11U24
The photo shows the mbed pretending to be a USB mouse.
I'm looking forward to getting stuck into my logic analyser project; once that is finished I may hook the mbed up to my pseudo-microwriter and turn it into a USB device.
new mbed was on its way. It's still in beta, but I was lucky enough to be selected for the beta program and I am starting work on my project.
First impressions are very positive: the instructions on the mbed site are very clear. It took about two minutes to use the cloud-based compiler to create the obligatory led blinking program and then drag the binary file over to the mbed.
Now to work through the examples and test it out as a USB device.
First impressions are very positive: the instructions on the mbed site are very clear. It took about two minutes to use the cloud-based compiler to create the obligatory led blinking program and then drag the binary file over to the mbed.
Now to work through the examples and test it out as a USB device.
Pololu are offering massive Black Friday discounts on a range of their robotics products.
Some of the bargains:
33% off the m3pi mbed-based mobile robot (you will need to get your own mbed)
50% off a pair of wixels
30-60% off various motor controllers
40% off the MinIMU-9 Gyro, Accelerometer, and Compass
The last item looks perfect for an autonomous vehicle/aircraft navigation system.
There are also flat discounts available, depending on the size of your order; these can be combined with the individual product discounts.
I just saved $150 on a $350 order.
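As a hypothetical illustration of how a per-product discount might stack with a flat order discount (the numbers and the multiplicative combination rule here are invented for the example; the actual Pololu percentages varied by item):

```python
def stacked_price(list_price, product_discount, order_discount):
    """Apply the product discount first, then the flat order discount on top."""
    return list_price * (1 - product_discount) * (1 - order_discount)

# e.g. a $100 item at 40% off, on an order qualifying for a 10% flat discount
print(round(stacked_price(100, 0.40, 0.10), 2))  # -> 54.0
```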
One thing to take into account: if you're buying from outside the USA you will have to pay local taxes and a processing fee to the shipping agent when the goods arrive.
The offer is time-limited, and will expire at 11:59 PM PST on Monday, November 28 (7:59 AM GMT on Tuesday, November 29).
Wednesday, 23 November 2011
Yesterday Google hosted a qualifying tournament for the Bay Area First Lego League (FLL) at their Mountain View Headquarters.
Imagine the excitement of the 9 to 14 year-olds as they arrived at Google headquarters. They will remember the event for the rest of their lives, and who knows what they will achieve in the years to come.
At a time when news is full of economic gloom, it's great to see the excitement and creativity of the young, who are living at the start of a golden age of maker technology.
image courtesy GreenArrays, Inc.
Robotics enthusiasts will probably be familiar with the Parallax propeller, an inexpensive multi-cored processor that is designed for hobbyist use. The propeller is an affordable introduction to parallel processing, but the GA144 offers a couple of orders of magnitude more processing power and is very energy-efficient.
The chairman of GreenArrays is Chuck Moore, the inventor of Forth. The GA144 supports arrayForth, a powerful language for parallel computing.
Tuesday, 22 November 2011
new Raspberry Pi stickers which sold out as soon as they were on offer.
The Raspberry Pi team ordered a second batch, but these took longer than expected to arrive.
It's heartening to read people's reactions to the apology for the delay.
You can see that there is huge commitment to the project from supporters around the world.
Raspberry Pi is potentially world-changing, and I think the foundation will be astounded and delighted by the strength of the effort to create free software to help this fantastic product achieve its promise.
And remember - while we're waiting for the Pi itself to get through QA, you can support the foundation by ... buying some stickers :)
I've just heard that I have been accepted for the beta test program for the new mbed cortex-m0.
I'm going to build a logic analyser which transfers data via the USB interface to a PC for analysis and display.
While one can do this using the existing mbed and serial communications, I think the USB approach should support significantly higher data rates.
I've set up a new project page in my notebook at mbed.org. Comments and suggestions welcome!
Saturday, 19 November 2011
Bluebot-L
A few months ago I started work on a wheeled robot based on Veroduino - an Arduino clone built using Vero strip board.
I had a wheeled base which contained a couple of geared motors. They drew a fairly low current, so I decided to use a TC4427A h-bridge controller on a shield-like board that sat atop the ATMega328 microcontroller. I decided to use separate power supplies for the controller and the motors; I bought a couple of battery holders, each taking 4 AA cells.
I used foam plastic for the base; it's easy to cut and drill, and it's lightweight, but strong enough for the job (so long as it isn't crushed). Construction was rapid and I soon got to the point where I could try it out on a simple line-following exercise.
Disaster! The batteries at each end contributed to a really high moment of inertia (MoI). When the robot turned, the high MoI meant that it over-corrected and hunted around the line without ever settling down. It swung rapidly from side to side like a boat in a raging stream.
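To put rough numbers on the problem (the masses and distances below are invented for illustration, not measured from Bluebot-L), the point-mass approximation I = Σ m·r² shows how much moving heavy packs to the ends of the base costs:

```python
def moment_of_inertia(masses_and_radii):
    """Point-mass approximation: I = sum of m * r**2 (kg, metres)."""
    return sum(m * r * r for m, r in masses_and_radii)

# two 4xAA packs (~0.1 kg each) at the ends, ~0.12 m from the turning axis,
# versus the same packs mounted near the centre, ~0.03 m from the axis
ends   = moment_of_inertia([(0.1, 0.12), (0.1, 0.12)])
centre = moment_of_inertia([(0.1, 0.03), (0.1, 0.03)])
print(ends / centre)  # -> 16.0: the end-mounted packs resist turning 16x more
```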
The hunting was made worse by the length of the base; a small turn to the left or right moved the light sensors by a couple of inches, so it was surprising that it followed the line at all. Sometimes it didn't, and raced off looking for something new to follow.
Fun to watch but not very satisfactory.
I could have compensated for the MoI by tuning the software, but it was obvious that this was a flawed design. So Bluebot-L has remained in a project box, waiting until I have time to sort out a better designed base.
Trackbot
I'll post more about Trackbot later this week-end.
Friday, 18 November 2011
Windows for Devices just published an article suggesting that a recently launched product from Embest and Farnell was taking on the BeagleBoard xM.
The DM3730-EVK from Farnell is based on a TI DM3730 chip - a Cortex-A8 with DSP capability. It comes with 512Mb of flash and 512Mb of RAM; the kit also includes a 4.3" screen. Out of the box it boots Linux or WinCE. It's priced at £353.00 + VAT.
The BeagleBoard has 512 Mb of RAM and the same TI chip; LED screens are separately available, as is WinCE. It costs £124.80 + VAT. A touch screen might add another £120 to that.
However, there's a more fundamental difference between the products.
The DM3730-EVK is clearly aimed at the commercial market. It seems to be a stable design which you could sensibly use as the basis for pilot (small-scale) production of a commercial product.
The Beagleboard explicitly warns that it's not intended for production use, largely because the team reserve the right to change the design whenever they come up with improvements. Great for prototyping or proof-of-concept, and inexpensive enough for anyone who wants to try out a new idea.
Both are available from Farnell:
Soft actuator: DRL, CSAIL, MIT
soft robot which uses hydraulics to move around. (The post suggests there are chemical reactions involved, but that doesn't seem to be the case).
The picture shows an actuator like those used to move the robot.
If that's not scary enough, you can watch a video.
I find the snail-like creature rather scary; it wouldn't be out of place in Doctor Who. Luckily it moves so slowly that escape would be fairly simple, unless you're surrounded by the beasts!
The original poster cited a paper on the Distributed Robotics Laboratory Wiki. You'll find many other fascinating robotics projects described there.
It's a great place to look for inspiration if you want to start a robotics project that would really stand out.
Thursday, 17 November 2011
mbed (courtesy of NXP)
I was lucky enough to be selected to do a road-test for Farnell's element14. The mbed is an amazing product.
One of its great strengths is the large support library which includes code developed by the mbed team and by its users.
Now a second mbed is in beta-test. Based on the NXP LPC11U24, it is designed for prototyping USB devices, battery powered applications, and 32-bit ARM Cortex-M0 designs. I'm hoping to get my hands on one soon and will report in due course.
The original mbed is still selling well, and is available from Farnell for just under £40 + VAT.
Wednesday, 16 November 2011
People have been speculating about what young users will do with something like Raspberry Pi, the $25 dollar computer. Here's a clue from TED.
I've had lunch with two friends in the last couple of days. Both have children at school. Both
really warmed to the idea of Raspberry Pi. They're optimistic that it would encourage their children to program, rather than just play games and visit social networking sites.
I can guess what the young may be getting from Santa this year.
Oomlout have come up with a neat design for a (nearly) free project box for Arduino. It's printed on 210 gsm card stock; you print it, cut it, fold it, and voilà - a simple but functional home for your Arduino creation.
I'm working on a project right now that could use one of these. I'll pop out for some card stock and have a go.
Update: My local stationer's (WH Smith) has packs of white 220 gsm card stock for a little over £3, and mixed packs of coloured card too. So you can make yourself a coloured project box if you want.
I've pushed my Beagleboard Python I2C library up to GitHub, along with the sketch I used to create an Arduino I2C slave that drives an LCD. (A slave driver?)
It's in a public repository at git@github.com:romilly/beagleboard-i2c.git
This is alpha code; the API will change; the code may not work and/or may do bad things. Use it at your own risk.
More coming soon. (Stop press: that will include a non-blank README )
If you aren't familiar with Git/GitHub, there is good reference material here.
Tuesday, 15 November 2011
I've hooked it up to an Arduino; it's easy to drive and looks very professional.
I stumbled across Robot Electronics while searching for LCD suppliers.
It turns out that they are the on-line trading arm of Devantech, whose Ultrasonic Rangers are widely used in Robotics Projects.
Their full range of products is much wider than that, and their on-line shop is well worth a visit.
Raspberry Pi
It's fascinating (and rather scary) to see how they have managed to compress a linux-capable System-on-a-chip onto a board the size of a credit card.
The first boards are expected in December; meanwhile, if you'd like to help test their online shop you can buy a Raspberry Pi sticker.
Monday, 14 November 2011
Arduino Bots and Gadgets is a step-by-step guide to building, adapting and designing prototypes based on the Arduino family. It's published by Make magazine - a member of the O'Reilly Family.
Like Make, the book delights the eye. Readers are drawn into a programme of exploration and discovery which will help them develop the skills they need to start prototyping their own inventions using the Arduino Platform.
The authors write clearly and the projects look fun to build:
a Stalker Guard
an Insect Robot
an Interactive Painting
a Boxing Clock
a Remote for a Smart Home, and
a Soccer Robot
I have one niggle: the first project uses an Arduino Nano, rather than the Uno which is mentioned at the start of the book. Readers might feel a little miffed if they'd bought the Uno only to find they needed another variant.
That's a minor complaint, and my overall reaction to the book is very positive. The book is clear, attractive, authoritative, and (at nearly 300 pages) excellent value.
If you want something more substantial than a basic experimenter's guide this book is a great buy. I got the electronic version, but it's also available in print. If someone you know wants to make a start on serious Arduino development this could be an ideal Christmas present!
Sunday, 13 November 2011
We're seeing a great and welcome upsurge of interest in introducing school children to programming, rather than just teaching them how to use Powerpoint, Word and Excel.
Jason Gorman got programmers and teachers together at Bletchley Park this Summer to look at ways of bringing programming into Schools
Raspberry Pi, inspired by the success of the BBC Micro in the '80s, is aiming to fire up the next generation of school-age programmers
the Arduino, along with introductory kits like Oomlout's experimenter's kit, has made the Internet of Things accessible to beginners of any age
MIT's Scratch has children and adults creating their own interactive stories, games, music and art
And now there's Minibloq.
Minibloq
The Arduino blog recently announced Minibloq v0.8 - a visual programming environment for the Arduino. It's aimed at creative-at-heart non-programmers, and I couldn't resist a play. (I leave it to those with whom I've developed software to decide whether or not I meet the criterion.)
The software has been developed under Windows, which is appropriate for the intended audience. My study is normally a windows-free zone; I could have installed Wine (the software that allows you to run a lot of Windows Software under Linux), but it seemed simpler to fire up a VirtualBox VM running Windows XP.
Once I'd done so, it took a minute or so to download the 72Mbyte package, and a couple of minutes to write a simple led flashing program. I have to confess I didn't read the documentation, but the interface is intuitive enough that I didn't need to.
There are plenty of examples on the website, and a demo video on the blog. Minibloq is fun, and I intend to play with it some more over the next few days.
One of the features I like best is that you can ask to see the code that's being generated as you develop your program. The visual interface is very accessible, but when you're ready Minibloq will help you transition painlessly to text-based programming.
Congratulations to the Minibloq team. Great work. We'll hear more about this software.
A Curriculum of Toys - an article in Make Magazine by Saul Griffith. (Thanks to +Limor Fried for the link).
Griffith examines the ways in which children can learn from toys. He lists the skills that great toys can help to develop, and looks at activities which foster those skills.
Play is a core part of learning, and good toys promote good play.
On Sunday mornings, years ago, my daughter Alex and I would steal quietly downstairs to listen to music and build things. We started when she was two, building things with sticklebricks, then moved on to duplo and LEGO.
She and I are convinced that those Sunday mornings played a part in fostering her Maker skills. If you want to see what she's made with them, take a look at Let's Get Prehistoric.
Some people dismiss play; they consider it at best a break, at worst a waste of time. Saul Griffith takes the opposite view. Play helps us, as children and as adults, to develop, to interact with others, to find out who we are and to express our natural creativity.
Playing with good toys is a great educator. Saul Griffith's article spells that out in concrete, practical terms.
Saturday, 12 November 2011
pre-order the BeagleBone
at Farnell UK, or ask to be notified when it's available. With availability expected in mid-November, you shouldn't have long to wait.
The BeagleBone is a hardware hacker's dream, with a compact format, low power consumption, plenty of I/O and enough processing power for serious real-time computation.
If you want to interface with the world outside, connecting to sensors and drive LEDs, motors and relays, the BeagleBone has the capacity to do it.
The BeagleBone has stirred up a lot of interest, and the first shipment is likely to sell out fast; if you want to get your hands on one, pre-order today!
Friday, 11 November 2011
boarduino over I2C. I'm running my simple Python script on the BeagleBoard and text is appearing on the boarduino's LCD.
I switched to the boarduino because I had one sitting in a drawer, and I knew from earlier this week that a 16MHz ATMega328 could handle the BeagleBoard's fast (400kHz) I2C.
The boarduino is designed for use with a prototyping board, and it has a very compact form factor. I'll probably transfer boarduino and LCD to a semi-permanent strip-board home.
The BeagleBoard can rest for a day or two; I'm working on TrackBot, sorting out the Radio problem that I encountered earlier in the week and adding a Pololu IR proximity sensor to help avoid damaging collisions. After that it will be time for a retrospective review, and I can start planning B4 - the BeagleBoard Based 'Bot.
Update: The first batch of stickers sold out in 24 hours!
More are on their way, and you can pre-order the stickers (but not the Pi!) from the website.
A lot of people are getting very excited about the imminent launch of the Raspberry Pi. It's an ultra-low cost Linux-capable board based on an ARM11 core, and it's due to go on sale in December.
If you want a quick overview of the Pi and what it can do, this video on the foundation website is a good place to start.
The Raspberry Pi foundation expects strong early demand from hobbyists, but the longer-term aim is to attract new users at school level. Since the Pi is to sell for about £20-£25 that looks realistic.
You can't pre-order the Pi, and should beware of sites offering the Pi: they are trying to part you from your money. You can help to test out the new on-line shop and contribute to this very worthwhile project by purchasing a Raspberry Pi sticker. You'll end up paying a little over £3, most of which will go to support the foundation.
If you want to use an ARM-based Linux board for robotics or test gear applications you are spoilt for choice at the moment. There's the Beagleboard, the mbed, the LPCxpresso, and the gumstix range. Soon there will be the BeagleBone and Raspberry Pi. While their markets overlap a little, each board has its own unique advantages. Once I've got my hands on the BeagleBone and Pi I will attempt a feature-based comparison and post the results.
While you're waiting, do register at the Raspberry Pi site and
buy a sticker!
Thursday, 10 November 2011
A lot of useful work today, and a valuable discovery, but I haven't quite got to where I want to be yet.
I'm using an Atmel ATMega8 on a DT107 board from Dontronics as the middleman, sitting between the BeagleBoard and the LCD display I want to drive.
It took a while to wire up the DT107 to the LCD. To find out what connects to what, I had to look at the Oomlout instructions, an Arduino pinout, the ATMega pinout, the DT107's schematic and the layout of the SimmBus.
I'm fairly sure I've got the connections correct, but the Beagleboard is complaining about an error on the I2C bus. That part of the wiring is easy to check, so I think the problem is more fundamental.
The DT107 board I'm using has an ATMega8 with an 8MHz crystal. It just may not be able to run I2C at 400kHz, which is what the BeagleBoard I2C-2 interface uses.
On the plus side, I discovered that you can use the Arduino development environment with an ISP to program a sketch directly onto an AVR. So there's no need to install a bootloader! That simplifies things considerably. I'm using a USBTinyISP from adafruit; it's supported by AVRDude, which Arduino uses under the covers, and they work really well together.
Tomorrow I'll solder up another DT107 with a 16MHz crystal and an ATMega328. If I've got the wiring from board to LCD correct, we should be away to the races.
The Arduino sketch is a slightly tweaked merge of two examples: the LCD driver and the I2C slave.
I've written the BeagleBoard driver in Python. It's really simple.
It uses my generic I2C class, which is also very straightforward.
For now, here's the Python code for the LCD Driver.
#! /usr/bin/python
import i2c, time

FF = '\f'  # Form Feed - the slave treats it as a 'clear screen' command

class LCD:
    def __init__(self, address):
        self.lcd = i2c.I2C_device(address)

    def pr(self, ch):
        # send one character as a single byte on the I2C bus
        self.lcd.begin_transmission()
        self.lcd.send(ord(ch))
        self.lcd.end_transmission()

    def prints(self, string):
        for ch in string:
            self.pr(ch)

    def clear(self):
        self.pr(FF)

lcd = LCD(4)
lcd.clear()
lcd.prints('Hi from BB')
time.sleep(2)
lcd.clear()
lcd.prints('Hello again')
I'll put all the code up on GitHub real soon now.
Wednesday, 9 November 2011
I want to revise the design of my I2C-based LCD board. The old design works, but it's not reliable and it puts a lot of traffic on the I2C bus. I'd prefer something that created less traffic and placed fewer demands on the host.
I2C is a great protocol for connecting sensors to micro-computers, but you can also use it to link two computers together.
The solution that I'm looking at will use an Atmel 8-bit AVR processor to drive the LCD and act as an I2C slave. That way the host computer needs to send just two bytes down the wire to write a character. The current design sends three or four times that many.
I'm going to develop the solution in small steps:
Get the BeagleBoard talking to an Arduino over I2C with BB as master and Arduino as slave.
Connect the Arduino to the LCD.
Add LCD driver code to the Arduino sketch, and control it using I2C
Put an ATMega8 in a DT104 board from Dontronics. I have several of each spare.
Program an arduino bootloader into the ATMega8.
Upload the I2C-LCD sketch to the ATMega8.
Build a simple stripboard base for the LCD and the motherboard and connect them up.
Voilà!
Step one is done.
There's a handy I2C slave sketch among the Arduino examples, and a good tutorial showing how to connect two Arduinos on the main website. I've programmed the Arduino slave to sit on I2C address 4.
It's safe to connect the Arduino and the BB because tincantools' Trainer-xM board shifts the Beagle's 1.8v I2C signals to the 5v required by the Arduino and vice versa.
I found a useful blog post about using the Beagleboard I2C tools to drive another kind of device; it's easy to adapt the given command to talk to the Arduino. When I type
i2cwrite 2 4 67
on the BeagleBoard, it sends a byte value of 67 on bus 2 to the device with I2C address 4, and the character 'C' (ASCII decimal 67) is received by the Arduino.
Connecting the Arduino and the LCD
There are very clear instructions from Oomlout for the LCD that they supply.
Once that's done the rest should be fairly straightforward.
Yesterday was frustrating but instructive. I started testing the radio link to TrackBot. At a minimum I want to create a dead man's handle to prevent the robot from careering into a wall. Some control over direction and speed would also be useful.
I spent hours trying to get the arrangement working with Trackbot. I could detect when a character was being transmitted, but it wasn't being sent correctly. Usually this is a baud rate, frame length or parity issue, but not this time.
I checked and rechecked code and comms settings, but to no avail.
Time to try the Bus Pirate.
The Bus Pirate is an open-source troubleshooting tool which you can use to snoop or send messages in various protocols - I2C, SPI and Serial comms among others. I got mine from Proto-Pic along with its cables.
I linked it up to my receiver and confirmed that data was not being received as expected. Each character was reported as having a framing error. I checked the transmitter, where I saw the same problem.
Eventually I gave up, packed up for the evening, and had supper. Later that evening I had a flash of inspiration. What if the signal was inverted?
The Bus Pirate has an option to invert a serial comms signal; as soon as I turned it on, the framing errors disappeared and the Bus Pirate correctly reported the characters I was sending. A quick Google search, and I remembered what I had found out years ago: RS232 comms is active-low, but TTL comms is active-high. In other words, a 0-bit going down an RS232 cable is represented by +12 V, and a 1-bit by -12 V. TTL is not inverted; a TTL 0-bit is 0 V but a 1-bit is +5 V. Since one end of the link uses RS232 and the other uses TTL, every character gets garbled.
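The garbling is easy to model: ignoring start and stop bits, a polarity mismatch means the receiver sees the bitwise complement of every byte. A tiny illustrative sketch (not the actual Bus Pirate logic):

```python
def invert_byte(b):
    """Model an inverted serial line: every bit of the byte is flipped."""
    return b ^ 0xFF

sent = ord('C')              # 0x43, the test character from earlier
received = invert_byte(sent) # 0xBC -- not printable ASCII, hence the garbage
print(hex(received))
```

Inverting twice recovers the original byte, which is exactly what the Bus Pirate's invert option does in effect.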
Last time round I fixed this by software. This time round I can simplify the transmitter circuitry and use an FTDI USB to TTL comms cable for the transmitter. But not today - I am waiting for a replacement soldering iron stand. So it's time to do some more I2C development on the BeagleBoard.
Tuesday, 8 November 2011
The first iteration is complete; TrackBot now moves under program control.
It's running a simple test program which turns left, turns right, then moves forward and back, stops, turns an LED on and then repeats the cycle. More ambitious behaviour can wait until I have a proximity sensor working; strip-board is fragile and I don't want TrackBot to break itself by running into furniture or a wall.
TrackBot is built from a Pololu RP5 base (now sadly discontinued by Pololu, but still available from Active Robots).
The base is controlled by a Solarbotics L298 compact motor driver; that in turn is controlled by an Arduino Pro Mini 328 (5v/16MHz).
The next iteration will be to link up radio control. I've already mounted an old 433.9 MHz radio receiver, but it's not yet connected to the Arduino.
After that I'll add proximity sensing (using IR and Ultrasonics), and a compass. That's probably about as far as I want to go with the Arduino Pro; the Arduino platform is great for quick, simple prototyping but not so well suited to complex, concurrent processing.
The whole project is a first step towards B4 - the BeagleBoard based 'Bot. That will be based on the BeagleBoard xM or the recently-announced BeagleBone. It will probably contain an AVR as well, running some form of RTOS.
KIR should be west of the International Date Line from 1995, i.e. 2 Jan in the test. 'RRULE UNTIL values must be specified in UTC when DTSTART is timezone-aware', but here DTSTART is tz-naive.
Was hoping to get a bit of feedback on designing a rrule. I'm wondering what the best way to create a minutely rule is but omit entire days. So for example get 5 minute intervals every weekday between some time but omit certain holidays. I thought of using something like
INTERVAL = 5
BUCKETS = int(60 / INTERVAL * 24)
WEEKDAYS = (MO, TU, WE, TH, FR)
nyd = rrule(MONTHLY, dtstart=datetime(2019, 1, 1, 0, 0), bymonth=1, byminute=range(0, 59, INTERVAL), byhour=range(0,24), bysetpos=range(1, BUCKETS + 1), byweekday=WEEKDAYS, count=300)
which works, but I'm wondering if this is some non recommended hackery. The reason I ask is because bysetpos only supports values up until 366. For minutely and secondly data you could have situations where you want to use bysetpos up to 1440 and 86400 respectively, so I'm wondering if bysetpos is not intended for this use?
You can use an rruleset with an exrule:
from datetime import datetime, timedelta
from dateutil.rrule import rrule, rruleset, MINUTELY, MO, TU, WE, TH, FR

INTERVAL = 5
BUCKETS = int(60 / INTERVAL * 24)
WEEKDAYS = (MO, TU, WE, TH, FR)
rrule_base = rrule(MINUTELY, dtstart=datetime(2019, 1, 1, 0, 0), count=300, byweekday=WEEKDAYS)
rrset = rruleset()
rrset.rrule(rrule_base)
# holidays: an iterable of datetimes at midnight of each holiday to skip.
# Either exclude each holiday day with an exrule...
for dt in holidays:
    rrset.exrule(rrule_base.replace(dtstart=dt, until=(dt + timedelta(days=1)), count=None))
# ...or, alternatively, exclude the individual occurrences with exdate:
for dt in holidays:
    for inst in rrule_base.between(dt, dt + timedelta(days=1)):
        rrset.exdate(inst)
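For comparison, the same "every 5 minutes on weekdays, minus holidays" filter can be sketched with the standard library alone, which makes the intended semantics easy to verify (the holiday list here is a made-up example):

```python
from datetime import datetime, timedelta, date

def weekday_slots(start, count, interval_min=5, holidays=()):
    """Return `count` datetimes spaced `interval_min` apart, skipping
    weekends and any calendar date listed in `holidays`."""
    out, t = [], start
    while len(out) < count:
        if t.weekday() < 5 and t.date() not in holidays:
            out.append(t)
        t += timedelta(minutes=interval_min)
    return out

# 2019-01-01 is a Tuesday, but excluded as a holiday here,
# so the first slot falls on 2019-01-02.
slots = weekday_slots(datetime(2019, 1, 1), 10, holidays=[date(2019, 1, 1)])
print(slots[0])
```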
last updated: 2020-09-04 (created 2017-08-16)
Some Smartys were updated by the DSO and now deliver more data (e.g. power per phase), which is a good thing :). The old software will not work any longer, so please update the software.
!!New Software!! Based on Strings and using the ESPBacker lib (more info about this library coming soon under microcontroller). It can also cope with bigger data streams (> 1024 byte).
!!!New!!! Use the SmartyReader with an ESP32. Look at the end of the page.
New software and alternative software now on github:
The software (smartyreader.ino) was enhanced (thanks to Bob (fisch.lu)). Sam Grimee helped to improve the software of this project and created a repo on github with an alternative soft: https://github.com/sgrimee/smarty-reader (thanks to Sam :)).
Basic construction kit:
If you're interested in a bare PCB (5€) or a basic construction kit (SMD already soldered; Wemos, jumper and cable included (see picture below), 30€), send a mail.
A 1000 µF electrolytic capacitor is missing from the picture with the kits.
Because of the EU Energy Efficiency Directive from 2012, the gas and electricity Distribution System Operators (DSOs) in Luxembourg replaced their gas and electricity meters with smartmeters (named smarty :(). Besides gas and electricity metering, the system is open for other metering data like water and district heat (M-Bus).
The French group Sagemcom delivered the smartmeters. All meters have to be read by one national central system, operated by a common operator. This is an economic group of interest (G.I.E.) of the 7 Luxembourgian gas and electricity DSOs, named Luxmetering G.I.E.
Luxmetering is getting the data from 4 registers for active, reactive, import and export energy (1/4h) and the 3 registers for gas, water & heat (1 h) over Power Line Communication (PLC). The smartmeters have also alarms and logs for quality of electrical energy supply (voltage, outages,...) and fraud detection, and calendar functions for the 2 external relays (home applications).
The customer wants to get his data and this is possible by reading the blinking LED of the smartmeter. This can be done with the IoT-board. Another possibility is the 10 second data from the smartmeter P1 port (RJ12 connector under the green lid). The P1 data output communication protocol and format is specified in the Dutch Smart Meter Requirements v5.0.2 . The solution deployed in Luxembourg includes an additional security layer standard that is conform to the IDIS package 2.0 requirement. The encryption layer is based on DLMS security suite 0 algorithm: AES128-GCM. More information can be found in this document.
The P1 port connector is a 6 pole RJ12.
LOW initiates the data communication!
More details in the Dutch Smart Meter Requirements.
The statement:
For backward compatibility reason, no OSM is allowed to set “Data Request” line low (set it to GND or 0V).
is not relevant for the Luxembourgish smartmeter, because an optocoupler diode gets the signal.
As stated, the communication on the P1 port is encrypted with AES128-GCM (Galois/Counter Mode). Each meter has its own 16 byte encryption key; ask your DSO or Luxmetering for your key. Besides the key we need the cipher text, 17 byte of Additional Authenticated Data (AAD), a 12 byte Initialization Vector (IV) and a 12 byte GCM tag. The AAD is fixed: 0x3000112233445566778899AABBCCDDEEFF. The other data is extracted from the serial stream.
The Initialization Vector (12 byte) consists of the system title (8 byte) and the frame counter (4 byte). The GCM Tag is found at the end of the stream.
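As a sketch, assembling the GCM inputs from the fields just described might look like this (the system title and frame counter values are made up for illustration; in the real sketch they are parsed out of the received serial stream):

```python
# Illustrative assembly of the AES128-GCM inputs for DLMS security suite 0.
system_title  = bytes.fromhex("53414731030700FF")  # 8 bytes from the stream (made up)
frame_counter = bytes.fromhex("00001A2B")          # 4 bytes from the stream (made up)
iv = system_title + frame_counter                  # 12-byte initialization vector

# The AAD is fixed for suite 0:
aad = bytes.fromhex("3000112233445566778899AABBCCDDEEFF")  # 17 bytes

print(len(iv), len(aad))  # 12 17
```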
More information on AES128-GCM can be found on http://weigu.lu/tutorials/sensors2bus/04_encryption/index.html.
The MQTT protocol is a publish/subscribe protocol, and it is quite simple to implement on microcontrollers like the LOLIN D1 mini pro board (ESP8266). The smartmeter data is published by the LOLIN board over WiFi. A message server (broker) is needed to distribute the data; for this I use mosquitto on a Raspberry Pi. The LOLIN board publishes the data, and the same Raspberry Pi with the broker, or another computer, subscribes to the data and generates e.g. a graphic.
For testing and debugging you can use the cool MQTT.fx software. It's a JavaFX-based MQTT client built on Eclipse Paho and very convenient for testing purposes. Downloads on http://www.mqttfx.org.
An alternative software is mqtt-spy.
More information on MQTT can be found on http://weigu.lu/tutorials/sensors2bus/06_mqtt/index.html.
BOM (basic kit)
1 220 Ω SMD 0805 reichelt.de: RND 0805 1 220
1 1k Ω SMD 0805 reichelt.de: RND 0805 1 1,0K
1 10k Ω SMD 0805 reichelt.de: RND 0805 1 10K
1 15k Ω SMD 0805 reichelt.de: RND 0805 1 15K
1 100 nF SMD 0805 reichelt.de: X7R 0805 CF 100N
1 1000 µF/6,3 V ELKO reichelt.de: RAD LXZ 6,3/1K0
2 2N7002 SOT-23 reichelt.de : 2N 7002 SMD
1 LOLIN D1 mini pro www.wemos.cc
1 RJ12 Jack reichelt.de: MEBP 6-6S
2 socket 1x8 straight reichelt.de: MPE 115-1-008
1 pin header reichelt.de: MPE 087-1-002
1 jumper reichelt.de: JUMPER 2,54GL SW
1 PCB www.weigu.lu
1 Western cable, 2x connectors, 6-pin reichelt.de: WK 6-6 2,5M
The LOLIN (Wemos) sends the data over WiFi. If your metal control cabinet shields the signal too much, it is possible to connect an external antenna to the LOLIN D1 mini pro by changing a 0 Ω resistor (look here).
If your WiFi is not reliable and you have the possibility to use ethernet, a W5100 Funduino ethernet board can be added. The PCB is also prepared to use an RTC with DS3231 and a LOLIN µSD card shield to log the data.
In the second version of the board I omitted the 100 µF capacitors, because all the boards worked fine without them. My new ammeter, the CurrentRanger from LowPowerLab, let me measure the currents exactly, and I saw that while WiFi is in use there are many short peaks drawing 400 mA up to 800 mA(!), depending on the board. Even identical boards can differ considerably. An external antenna reduced the current, so I suspect that not all antennas are well matched. As the peaks are very short, the internal power supply of the smarty (specified up to 250 mA) has no problem delivering the current, but it is better to add a capacitor to the board. With a 1000 µF capacitor the spikes come down to 300 mA. The ESP32 and also the new (green) LOLIN sometimes have problems without that capacitor, so add it to the circuit! In the middle of the picture (time axis) the capacitor was added (100 mV corresponds to 100 mA).
If you think the hardware is not working correctly, you can test it with a voltmeter. Without LOLIN (Wemos) board (jumper must be connected), the enable pin gets 5V and data is sent every 10 seconds.
You can see a change in voltage (3.1 V to 2.5 V) every 10 s (jumper connected, no board) measuring between pin 2 (GND) and pin 7 (RX) on the board header (second and 7th pin on the right side). On one of my voltmeters I don't see it in the displayed numbers but on the bar graph, which reacts quicker. This can of course be observed much better with an oscilloscope, where the serial data line changes between 3 V and GND for about 70 ms.
Another possibility is to connect a TTL/USB adapter (RX to RX pin 7 (which is TX from the Smartymeter) and GND to GND, jumper set, no board), and check the data stream with a terminal program (e.g. cleverterm) with 115200 bit/s (8 data bit, no parity and 1 stop bit).
If you don't see a change there is a possibility that the hardware is not working. To be sure you can measure directly on the P1 cable. Connect pin 2 (Enable) to pin 1 (5 V) and measure between pin 5 (TX) and pin 6 (GND). As the signal is inverted the Voltmeter shows 0 V and reacts every 10 s. To be totally sure use an oscilloscope.
If you get no signal on the cable, test another cable. If there is still no signal, the Smartymeter does not send a signal. Ask your DSO for support.
First install the newest Arduino IDE (1.8.8 at the time of writing). To use our ESP8266 LOLIN/WEMOS we add the line http://arduino.esp8266.com/stable/package_esp8266com_index.json to File > Preferences > Additional Boards Manager URLs.
To install the manager go to Tools > Board: > Boards Manager..., select the manager and click install. Now choose under Tools > Board: (you have to scroll) LOLIN/WEMOS D1 mini Pro.
We need a Crypto library to decode the AES128-GCM and an MQTT library to publish our data. Go to Tools > Manage Libraries.... Type Crypto in the search field, click on Crypto by Rhys Weatherley and install it. Then search for mqtt pubsub and click on PubSubClient by Nick O'Leary. Install the library.
In our Arduino sketch we have 4 "switches" to comment or uncomment:
SECURE is recommended to get a secure MQTT connection with your MQTT server. You need to define a user and a password.
DEBUG outputs messages on Serial1 (D4 on LOLIN/WEMOS, SD3 on MHEtLive) and is only needed when changing the software.
STATIC gets you a fixed IP address in your network. Provide IP and gateway IP.
ESP32MK lets you use the MH ET LIVE ESP32MiniKit instead of the LOLIN/WEMOS D1 mini pro (look at the text at the end of this page).
Further you have to provide a Wifi SSID and password, the mqttserver IP, the mqtttopic and your key for the smartmeter.
// Comment or uncomment the following lines suiting your needs
#define SECURE // if you want a secure MQTT connection (recommended!!)
//#define DEBUG // if debugging requested
//#define STATIC // if static IP needed (no DHCP)
//#define ESP32MK // if MH ET LIVE ESP32MiniKit instead of LOLIN D1 mini pro
...
// wifi and network settings
const char *ssid = "mywifi";
const char *password = "mypass";
#ifdef STATIC
IPAddress network_static_IP (192,168,178,14); //static IP
IPAddress network_subnet_mask (255,255,255,0);
IPAddress network_gateway (192,168,178,1);
#endif // ifdef STATIC
// MQTT settings
#ifdef SECURE
const short mqttPort = 8883; // TLS=8883
const char *mqtt_user = "me";
const char *mqtt_pass = "myMqttPass12!";
WiFiClientSecure espClient;
#else
const short mqttPort = 1883; // clear text = 1883
WiFiClient espClient;
#endif
PubSubClient client(espClient);
const char *mqtt_server = "192.168.178.160";
const char *mqtt_client_Id = "smarty_lam1_p1";
const char *mqtt_topic = "lamsmarty";
const char *smartyreader_hostname = "SmartyReader";
...
//Keys for SAG10307000xxxxx
byte key_SM1[] = {0x3B, 0x9C, 0xDB, 0x8C, 0xE3, 0xFD, 0xB7, 0x02,
0x16, 0x35, 0xFF, 0x6F, 0xB0, 0x2E, 0xE1, 0xDF};
The code is in the Downloads section at the end of the page.
To program the board, you have to take it out of the socket (the transistor on RXD prevents proper programming).
The software allows debugging and output of the data over serial1 on Pin D4. For more info see: http://weigu.lu/microcontroller/tips_tricks/esp8266_tips_tricks
A Python (python3) script is used to get our smartmeter MQTT data from the broker. We use the paho.mqtt.client library which can be installed with pip to subscribe to our topic.
sudo pip3 install paho-mqtt
The data is saved in a file (/data) and the old day-files are archived (/data_archive). The script also generates a png-file with gnuplot that is displayed on an internal homepage and sent to an email address (full code in Downloads).
To do so we can use the same Raspberry Pi which hosts the broker.
You possibly have to adjust the broker IP address and your smartmeter id (the last 3 digits of your smarty id, e.g. "345" for SAG1030700012345).
#!/usr/bin/python3
#
# Name: smartyreader.py
# Purpose: Client to get MQTT data from Mosquitto
# Author: weigu.lu
# Date: 8/17
#
...
import paho.mqtt.client as mqtt
...
clientID = "getsmarty_p1"
brokerIP = "192.168.178.101"
brokerPort = 1883
topic = "basement/smarty1"
sm_id = "345"

# Callback that is executed when the client receives a CONNACK response from the server.
def onConnect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    mqttc.subscribe(topic, 0) # Subscribe to the topics (topic name, QoS)

# Callback that is executed when we disconnect from the broker.
def onDisconnect(client, userdata, message):
    print("Disconnected from the broker.")

# Callback that is executed when subscribing to a topic
def onSubscribe(client, userdata, mid, granted_qos):
    print('Subscribed on topic.')

# Callback that is executed when unsubscribing to a topic
def onUnsubscribe(client, userdata, mid, granted_qos):
    print('Unsubscribed on topic.')

# Callback that is executed when a message is received.
def onMessage(client, userdata, message):
    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    global sm, smp, sme, sm_mn # p power, e energy, mn at midnight
    now = datetime.now()
    now_time = now.time()
    ftime = strftime("%Y_%m_%d", localtime())
    ftime3 = strftime("%d.%m.%y %H:%M:%S", localtime())
    io = message.payload.decode("utf-8")
    try:
        ioj = json.loads(io)
    except ValueError:
        ioj = {"dt": "error"}
    print(ioj)
    # remember the meter reading around midnight (payload must be parsed first)
    if time(23,59,0) <= now_time <= time(23,59,59) and "c1" in ioj:
        sm_mn = ioj["c1"].rstrip("*kWh") # change to "p1" for production
    temp = ioj["dt"]
    if (temp[0] != 'e') and (temp[0] != 'c'):
        sm_new = ioj["c1"].rstrip("*kWh") # change to "p1" for production
        if sm != "0":
            smp = str(round((float(sm_new) - float(sm)) * 60000.0, 3))
        sm = sm_new
        sme = str(float(sm) - float(sm_mn))
        if sme[0] == '-':
            sme = "0"
        try:
            f = open(sm_p1_datafile1, 'r')
        except IOError:
            print("error reading file " + sm_p1_datafile1)
        lineList = f.readlines() # read all lines
        f.close()
        try:
            f = open(sm_p1_datafile1, 'a')
        except IOError:
            print("Cannot create or find file: " + sm_p1_datafile1)
        try:
            f2 = open(sm_p1_datafile2 + ftime + '.min', 'a')
        except IOError:
            print("Cannot create or find file: " + sm_p1_datafile2)
        if (len(lineList)) == 1:
            sm_p1_data = ' '.join((ftime3, sm, sme, smp))
            sm_p1_data = sm_p1_data + '\n'
        else:
            line = lineList[len(lineList) - 1] # get the last line
            lline = shlex.split(line) # convert string (space separated items) to list
            sm_p1_data = ' '.join((ftime3, sm, sme, smp))
            sm_p1_data = sm_p1_data + '\n'
        print(sm_p1_data, end='')
        f.write(sm_p1_data)
        f2.write(sm_p1_data)
        f.close()
        f2.close()
    else:
        print("loop not executed (error or connect message)")
...
# Main
mqttc = mqtt.Client(client_id=clientID, clean_session=True) # create client
mqttc.on_connect = onConnect # define the callback functions
mqttc.on_disconnect = onDisconnect
mqttc.on_subscribe = onSubscribe
mqttc.on_unsubscribe = onUnsubscribe
mqttc.on_message = onMessage
mqttc.connect(brokerIP, brokerPort, keepalive=60, bind_address="") # connect to broker
mqttc.loop_start() # start loop to process callbacks! (new thread!)
sm = "0"
smp = "0"
sme = "0"
sm_mn = "0"
try:
    while True:
        now = datetime.now()
        now_time = now.time()
        ...
except KeyboardInterrupt:
    print("Keyboard interrupt by user")
mqttc.loop_stop() # clean up
mqttc.disconnect()
To access the Raspberry Pi we will set a static IP address. Use the editor nano to append the following to the file /etc/dhcpcd.conf:
# Custom static IP address for eth0.
interface eth0
static ip_address=192.168.1.67
static routers=192.168.1.1
static domain_name_servers=192.168.1.1 8.8.8.8
# Custom static IP address for wlan0.
interface wlan0
static ip_address=192.168.1.69
static routers=192.168.1.1
static domain_name_servers=192.168.1.1 8.8.8.8
```bash
cd /etc
sudo nano dhcpcd.conf
```
Save with `CTRL+O` and exit with `CTRL+X`.
##### Setting up the webserver Lighttpd on the raspi
Lighttpd is an efficient high performance web server. It has a small memory footprint and an effective management of the cpu-load compared to other web-servers. Naturally you can use another web server, but possibly you have to adjust the path to your web page.
```bash
sudo apt update
sudo apt upgrade
sudo apt install lighttpd
```
Test if the web server is running by typing the ip address of your raspi in the url field of your browser.
The html files are in /var/www/html. Copy the following html code (filename: index.html) to /var/www/html:
<!DOCTYPE html>
<html>
<head>
<title>Smarty P1</title>
</head>
<body>
<h1>Smarty Data</h1>
<p><img src="png/sm_p1_daily.png" alt="smarty data"></p>
</body>
</html>
Also create an empty directory named /png in /var/www/html.
sudo mkdir /var/www/html/png
Install gnuplot:
sudo apt install gnuplot
To test gnuplot you can use the following command:
cd /home/pi/smarty/gp
gnuplot sm_p1.gp
The sm_p1.gp is created by our Python script from a template file. This template file is found in /smarty/gp (it is contained in the file smartyreader.zip). Here is the code that generates the gp file:
def sm_create_gp_file():
    """ The function prepares the gp file for plotting with gnuplot. First the
        old gp file is deleted. Then it uses the xx_gp_template.gp file in
        ~/../gp and replaces the keywords between the % signs, creating
        a new gp (xx.gp) file."""
    ftime2 = strftime("%d.%m.%y", localtime())
    Title = ftime2
    XFormat = '"%H:%M"'
    XTics = "60*60" # seconds
    Begin = ftime2 + " 00:00:01"
    End = ftime2 + " 23:59:59"
    Output = png_dir + "sm_p1_" + ftime + ".png"
    Input = sm_p1_datafile1
    try:
        os.remove(sm_p1_gnupfile2)
    except OSError:
        pass
    try:
        gf1 = open(sm_p1_gnupfile1, 'r')
    except IOError:
        print("Cannot find file: " + sm_p1_gnupfile1)
    try:
        gf2 = open(sm_p1_gnupfile2, 'a')
    except IOError:
        print("Cannot find file: " + sm_p1_gnupfile2)
    gline1 = gf1.readline()
    while gline1 != "":
        if "%TITLE%" in gline1:
            gline1 = gline1.replace("%TITLE%", Title)
        if "%XFORMAT%" in gline1:
            gline1 = gline1.replace("%XFORMAT%", XFormat)
        if "%XTICS%" in gline1:
            gline1 = gline1.replace("%XTICS%", XTics)
        if "%BEGIN%" in gline1:
            gline1 = gline1.replace("%BEGIN%", Begin)
        if "%END%" in gline1:
            gline1 = gline1.replace("%END%", End)
        if "%OUTPUT%" in gline1:
            gline1 = gline1.replace("%OUTPUT%", Output)
        if "%INPUT%" in gline1:
            gline1 = gline1.replace("%INPUT%", Input)
        gf2.write(gline1)
        gline1 = gf1.readline()
    gf1.close()
    gf2.close()
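The loop above is plain keyword substitution over a template line; the same effect can be sketched compactly (placeholder names as in the template, file handling left out):

```python
def fill_template(template, values):
    """Replace every %KEY% placeholder in a gnuplot template string."""
    for key, val in values.items():
        template = template.replace("%" + key + "%", val)
    return template

line = 'set title "%TITLE%"'
print(fill_template(line, {"TITLE": "04.09.20"}))  # set title "04.09.20"
```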
Here is the result with gnuplot:
First you have to install ssmtp:
sudo apt-get install ssmtp # needed
sudo apt-get install mailutils # not mandatory
sudo apt-get install mpack # for attachments
With your editor, set up the defaults for SSMTP in /etc/ssmtp/ssmtp.conf. Edit the fields:
root=my@mail.adr
mailhub=smtp.xxx.xx:587
hostname=localhost
rewriteDomain=www.xxx.com
FromLineOverride=YES
AuthUser=youruserid
AuthPass=xxxxxxxxxxxx
UseSTARTTLS=YES
Test your mail with:
echo "Hello world email body" | mail -s "Test Subject" my@mail.adr
The Python script will send the daily graphic by mail at 1 o'clock in the morning.
If you want to start the Python script automatically at reboot, add the following line to your /etc/crontab file.
@reboot root python3 /home/pi/smarty/smartyreader.py >> /home/pi/smarty/smartyreader_log.txt 2>&1
The output of the Python script is redirected to a text file for debugging. To log the cron jobs, uncomment cron in the file /etc/rsyslog.conf. You will find the log file in /var/log/cron.log. Here is a helpful link if you have trouble with your cron job.
We had problems with the ESP8266 WiFi in school! The ESP32 has no such problems, so here is a SmartyReader with an ESP32.
The MH ET LIVE ESP32MiniKit board is almost pin compatible with the LOLIN (Wemos) D1 mini pro. But I had problems getting it to work, because there is an error in the pinout sheet found on the internet: RxD and TxD are interchanged and not compatible with the D1 mini pro! Fortunately the ESP32 has multiplexing features, so pins can be changed in code. This is done with the begin command SR_Serial.begin(115200, SERIAL_8N1, 1, 3), which defines GPIO pin 1 for RxD1 and pin 3 for TxD1.
A second change is a capacitor (1000 µF/10 V, even better 4700 µF/10 V) soldered to the 5 V header of the board: the ESP32 draws a higher current in short bursts while using WiFi.
If you want to debug the code, Arduino Serial1 is on SD3 (u1TxD, GPIO10).
Here are the changes in the code:
// uncomment if MH ET LIVE ESP32MiniKit instead of LOLIN/WEMOS D1 mini pro
#define ESP32MK
...
#ifdef ESP32MK
#include <WiFi.h> // ESP32 MH ET LIVE ESP32MiniKit
#else
#include <ESP8266WiFi.h> // ESP8266 LOLIN/WEMOS D1 mini pro
#endif // ifdef ESP32MK
...
#ifdef ESP32MK
const byte DATA_REQUEST_SM = 17; //active Low! 17 for MH ET LIVE ESP32MiniKit
#else
const byte DATA_REQUEST_SM = D3; //active Low! D3 for LOLIN/WEMOS D1 mini pro
#endif // ifdef ESP32MK
void setup() {
...
#ifdef ESP32MK
SR_Serial.begin(115200,SERIAL_8N1, 1, 3); // change reversed pins of ESP32
#else
SR_Serial.begin(115200); // Hardware serial connected to smarty
#endif //ESP32MK
...
}
void setup_wifi() {
...
WiFi.begin(ssid, password);
#ifdef ESP32
WiFi.setHostname(smartyreader_hostname);
#else
WiFi.hostname(smartyreader_hostname);
#endif // ifdef ESP32MK
...
}
A new file to download will follow in the coming days.
A CNN language model
Learning materials:
Full code for this section
The utils.py and visual.py the code depends on can be found here
My short introductory video on natural-language sentence understanding
The CNN language model for sentence classification: Convolutional Neural Networks for Sentence Classification
What's this about ¶
When we think of solving language problems with deep learning, the RNN family of models comes to mind naturally. But can a model like the CNN, which specialises in image processing, also hold its own in the language domain? The answer is yes. As the short introductory video showed, once a sentence enters a deep-learning model it is really just a sequence of vectors. Whatever the model, as long as it turns text into vectors effectively, it is a good model.
This time we try a CNN model that turns a text description into a vector representation. To sum this CNN language model up in one sentence: slide N convolution windows of different lengths across the sentence, giving the model N different reading widths, and combine the information from all N widths into a summary of what the sentence says.
How to convolve ¶
Last time we introduced the Encoder-Decoder idea; this CNN language model focuses on how to use a CNN as the Encoder that extracts the content of text.
What a CNN does best is convolution, but compared with convolution over images, convolution over a sentence plays a special role: researchers use convolution kernels of different lengths to observe local features of different lengths in the sentence. The CNN's understanding of the sentence is then pieced together from these local features of different lengths.
For example:
kernel A reads two characters at a time;
kernel B reads three characters at a time;
kernel C reads four characters at a time.
Kernels A, B and C each extract their own understanding of the sentence from their particular viewpoint, and pooling these different understandings gives a more complete picture of the sentence.
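The effect of the different window widths is easy to check with the "valid" convolution length formula: a window of size n over a sentence of step positions leaves step - n + 1 outputs, which is where the 7, 6 and 5 in the encoder's shape comments come from. A quick sketch:

```python
def conv_output_len(step, n):
    """Output length of a 'valid' convolution with window size n over step positions."""
    return step - n + 1

step = 8  # each date string in this example is 8 tokens long
print([conv_output_len(step, n) for n in (2, 3, 4)])  # [7, 6, 5]
```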
Translation ¶
In this section I again use translation as the example. With the experience from the earlier Seq2Seq section, we know that a translation model is really an Encoder plus a Decoder. Here we focus on building the Encoder with a CNN so the computer can read the sentence; for the Decoder we reuse the RNN Decoder from Seq2Seq.
Show me the code ¶
I use a very simple, easy-to-train date-conversion example to demonstrate the CNN's language-understanding ability. The task is:
# Chinese "year-month-day" -> "day/month/year"
"98-02-26" -> "26/Feb/1998"
We convert Chinese-ordered dates into reversed English-style dates; the data ranges from the late 20th century to the early 21st. To add some difficulty, the Chinese form does not say which century it is, so the computer has to work out by itself whether the English output falls in the 20th or the 21st century.
First the training procedure (if you just want the full code, click here); it is simple: generate data, build the model, train the model.
def train():
    # the date-generator code is already wrapped up for you
    data = utils.DateData(4000)
    # build the model
    model = CNNTranslation(...)
    # training
    for t in range(1500):
        bx, by, decoder_len = data.sample(32)
        loss = model.step(bx, by, decoder_len)
Finally you can watch the whole training run. At the start the predictions are rubbish, but they improve a lot later on. The final CNN model is still not especially accurate, probably because the parameter count is not large enough, but it is usable.
t: 0 | loss: 3.293 | input: 96-06-17 | target: 17/Jun/1996 | inference: /////1///99
t: 70 | loss: 1.110 | input: 91-08-19 | target: 19/Aug/1991 | inference: 03/Feb/2013<EOS>
t: 140 | loss: 0.972 | input: 11-04-30 | target: 30/Apr/2011 | inference: 10/Sep/2001<EOS>
t: 210 | loss: 0.828 | input: 76-03-14 | target: 14/Mar/1976 | inference: 16/May/1977<EOS>
...
t: 1400 | loss: 0.183 | input: 86-10-14 | target: 14/Oct/1986 | inference: 14/Oct/1986<EOS>
t: 1470 | loss: 0.151 | input: 18-02-08 | target: 08/Feb/2018 | inference: 05/Feb/2018<EOS>
The most important code of this section is below, where we build the Encoder by hand. For this example we use three Conv2D layers; they convolve over local windows of different lengths, so their shapes all differ, and MaxPool2D then brings them down to the same dimension. All the local information can then be concatenated and processed into a sentence vector.
import tensorflow as tf
from tensorflow import keras
import numpy as np
import tensorflow_addons as tfa

class CNNTranslation(keras.Model):
    def __init__(self, ...):
        super().__init__()
        # encoder
        self.enc_embeddings = keras.layers.Embedding(
            input_dim=enc_v_dim, output_dim=emb_dim,  # [enc_n_vocab, emb_dim]
            embeddings_initializer=tf.initializers.RandomNormal(0., 0.1),
        )
        self.conv2ds = [
            keras.layers.Conv2D(16, (n, emb_dim), padding="valid", activation=keras.activations.relu)
            for n in range(2, 5)]
        self.max_pools = [keras.layers.MaxPool2D((n, 1)) for n in [7, 6, 5]]
        self.encoder = keras.layers.Dense(units, activation=keras.activations.relu)
        ...

    def encode(self, x):
        embedded = self.enc_embeddings(x)     # [n, step, emb]
        o = tf.expand_dims(embedded, axis=3)  # [n, step=8, emb=16, 1]
        co = [conv2d(o) for conv2d in self.conv2ds]  # [n, 7, 1, 16], [n, 6, 1, 16], [n, 5, 1, 16]
        co = [self.max_pools[i](co[i]) for i in range(len(co))]  # [n, 1, 1, 16] * 3
        co = [tf.squeeze(c, axis=[1, 2]) for c in co]  # [n, 16] * 3
        o = tf.concat(co, axis=1)  # [n, 16*3]
        h = self.encoder(o)        # [n, units]
        return [h, h]
The Decoder part that follows is exactly the same as in Seq2Seq; the decoder behaves differently during training and during generation. To ease training, especially at the beginning, feeding the decoder the true label as input greatly reduces the difficulty: whether or not the previous step was predicted correctly, the decoder's next input is the correct token.
class Seq2Seq(keras.Model):
    def __init__(self, ...):
        ...
        # decoder
        self.dec_embeddings = keras.layers.Embedding()  # [dec_n_vocab, emb_dim]
        self.decoder_cell = keras.layers.LSTMCell(units=units)
        decoder_dense = keras.layers.Dense(dec_v_dim)
        # decoder used during training
        self.decoder_train = tfa.seq2seq.BasicDecoder(
            cell=self.decoder_cell,
            sampler=tfa.seq2seq.sampler.TrainingSampler(),  # sampler for train
            output_layer=decoder_dense
        )
        self.cross_entropy = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        self.opt = keras.optimizers.Adam(0.01)

    def train_logits(self, x, y, seq_len):
        s = self.encode(x)
        dec_in = y[:, :-1]  # ignore <EOS>
        dec_emb_in = self.dec_embeddings(dec_in)
        o, _, _ = self.decoder_train(dec_emb_in, s, sequence_length=seq_len)
        logits = o.rnn_output
        return logits

    def step(self, x, y, seq_len):
        with tf.GradientTape() as tape:
            logits = self.train_logits(x, y, seq_len)
            dec_out = y[:, 1:]  # ignore <GO>
            loss = self.cross_entropy(dec_out, logits)
        grads = tape.gradient(loss, self.trainable_variables)
        self.opt.apply_gradients(zip(grads, self.trainable_variables))
        return loss.numpy()
When predicting in production, when we are actually translating, we want a different sampling method for decoding: the decoder's next prediction is based on its own previous prediction, not on the true label.
class Seq2Seq(keras.Model):
    def __init__(self):
        ...
        # predict decoder
        self.decoder_eval = tfa.seq2seq.BasicDecoder(
            cell=self.decoder_cell,
            sampler=tfa.seq2seq.sampler.GreedyEmbeddingSampler(),  # sampler for predict
            output_layer=decoder_dense
        )
        ...

    def inference(self, x):
        s = self.encode(x)
        done, i, s = self.decoder_eval.initialize(
            self.dec_embeddings.variables[0],
            start_tokens=tf.fill([x.shape[0], ], self.start_token),
            end_token=self.end_token,
            initial_state=s,
        )
        pred_id = np.zeros((x.shape[0], self.max_pred_len), dtype=np.int32)
        for l in range(self.max_pred_len):
            o, s, i, done = self.decoder_eval.step(
                time=l, inputs=i, state=s, training=False)
            pred_id[:, l] = o.sample_id
        return pred_id
So in seq2seq, to speed up training, the decoding method used in training generally differs from the one used at inference. At inference there is no choice: the previously decoded word must be fed back as the next input, because no label is available. During training, however, we can use the labels to make training more effective.
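The contrast can be illustrated with a toy next-token function (a made-up stand-in, not the tfa samplers themselves): under teacher forcing a wrong prediction stays local, while greedy feedback lets the error propagate through the rest of the sequence:

```python
def toy_model(prev_tok):
    # made-up "model": usually predicts prev + 1, but always gets 2 wrong
    return 99 if prev_tok == 2 else prev_tok + 1

true_seq = [0, 1, 2, 3, 4]

# teacher forcing: every input is the TRUE previous token
tf_preds = [toy_model(t) for t in true_seq[:-1]]

# greedy inference: every input is the model's OWN previous output
greedy, tok = [], true_seq[0]
for _ in range(len(true_seq) - 1):
    tok = toy_model(tok)
    greedy.append(tok)

print(tf_preds)  # [1, 2, 99, 4]   -> the mistake stays local
print(greedy)    # [1, 2, 99, 100] -> the mistake propagates
```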
Limitations
Have you ever thought about it: using a CNN to encode sentence vectors has a limitation. It requires a maximum sentence length, and sentences exceeding that length are best truncated. Just as when convolving an image, the image must have a fixed height and width, otherwise convolution and pooling run into scale problems. This is a hard limitation compared with RNNs, and we will run into it again later when we introduce Transformer-style language models.
Summary
In this section we saw that a CNN can also encode sentence vectors, although for decoding we still kept the earlier RNN decoder scheme. We have now covered both RNN-based and CNN-based language models; ten years ago these counted as state of the art, but today researchers work with something called Attention. In the next section we will introduce Attention and its important role in language models.
Author: George Seif
x = lambda a, b : a * b
print(x(5, 6)) # prints '30'
x = lambda a : a*3 + 3
print(x(3)) # prints '12'
def square_it_func(a):
    return a * a

x = map(square_it_func, [1, 4, 7])
print(list(x)) # prints '[1, 16, 49]' (map returns a lazy iterator in Python 3)
def multiplier_func(a, b):
    return a * b

x = map(multiplier_func, [1, 4, 7], [2, 5, 8])
print(list(x)) # prints '[2, 20, 56]'
Look at the examples above! We can apply a function to a single list or to multiple lists. In fact, you can use any Python function as input to map, as long as it is compatible with the elements of the sequence you are operating on.
# Our numbers
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
# Function that filters out all numbers which are odd
def filter_odd_numbers(num):
    if num % 2 == 0:
        return True
    else:
        return False

filtered_numbers = list(filter(filter_odd_numbers, numbers))
print(filtered_numbers)
# filtered_numbers = [2, 4, 6, 8, 10, 12, 14]
from itertools import count, dropwhile, groupby

# Easy joining of two lists into a list of tuples
for i in zip([1, 2, 3], ['a', 'b', 'c']):  # izip was removed in Python 3; use zip
    print(i)
# (1, 'a')
# (2, 'b')
# (3, 'c')
# The count() function returns an iterator that
# produces consecutive integers, forever. This
# one is great for adding indices next to your list
# elements for readability and convenience
for i in zip(count(1), ['Bob', 'Emily', 'Joe']):
    print(i)
# (1, 'Bob')
# (2, 'Emily')
# (3, 'Joe')
# The dropwhile() function returns an iterator that returns
# all the elements of the input which come after a certain
# condition becomes false for the first time.
def should_drop(x):
    print('Checking:', x)
    return x < 5  # keep dropping while the value is below 5

for i in dropwhile(should_drop, [2, 4, 6, 8, 10, 12]):
    print('Result:', i)
# Checking: 2
# Checking: 4
# Checking: 6
# Result: 6
# Result: 8
# Result: 10
# Result: 12
# The groupby() function is great for retrieving bunches
# of iterator elements which are the same or have similar
# properties
a = sorted([1, 2, 1, 3, 2, 1, 2, 3, 4, 5])
for key, value in groupby(a):
    print((key, list(value)))
# (1, [1, 1, 1])
# (2, [2, 2, 2])
# (3, [3, 3])
# (4, [4])
# (5, [5])
# (1) Using a for loop
numbers = list()
for i in range(1000):
    numbers.append(i + 1)
total = sum(numbers)

# (2) Using a generator
def generate_numbers(n):
    num = 1
    while num <= n:
        yield num
        num += 1

total = sum(generate_numbers(1000))

# (3) Using range(); in Python 2 the lazy version was xrange(),
# while Python 3's range() is already lazy and memory-friendly
total = sum(range(1000 + 1))
Composing signature with solid brush background
Here are the steps to add a text signature to a document with GroupDocs.Signature:
Create a new instance of the Signature class and pass the source document path or stream as a constructor parameter.
Instantiate the TextSignOptions object with all required additional options and set the Background.setBrush property to an instance of SolidBrush.
Call the sign method of the Signature class instance and pass the SignOptions to it.
Analyze the SignResult to check the newly created signatures if needed.
Signature signature = new Signature("sample.pdf");
TextSignOptions options = new TextSignOptions("John Smith");

// adjust signature appearance brush
// setup background
Background background = new Background();
background.setColor(Color.GREEN);
background.setTransparency(0.5);
background.setBrush(new SolidBrush(Color.LIGHT_GRAY));
options.setBackground(background);

// locate signature
options.setWidth(100);
options.setHeight(80);
options.setVerticalAlignment(VerticalAlignment.Center);
options.setHorizontalAlignment(HorizontalAlignment.Center);
Padding padding = new Padding();
padding.setTop(20);
padding.setRight(20);
options.setMargin(padding);

// set alternative signature implementation on document page
options.setSignatureImplementation(TextSignatureImplementation.Image);

// sign document to file
SignResult signResult = signature.sign("signed.pdf", options);

// analyzing result
System.out.print("List of newly created signatures:");
int number = 1;
for (BaseSignature temp : signResult.getSucceeded()) {
    System.out.print("Signature #" + number++ + ": Type: " + temp.getSignatureType()
        + " Id:" + temp.getSignatureId() + ", Location: " + temp.getLeft() + "x" + temp.getTop()
        + ". Size: " + temp.getWidth() + "x" + temp.getHeight());
}
More resources
GitHub Examples
You may easily run the code above and see the feature in action in our GitHub examples:
GroupDocs.Signature for .NET examples, plugins, and showcase
GroupDocs.Signature for Java examples, plugins, and showcase
Document Signature for .NET MVC UI Example
Document Signature for .NET App WebForms UI Example
Document Signature for Java App Dropwizard UI Example
Document Signature for Java Spring UI Example
Free Online App
Along with the full-featured .NET library we provide simple, but powerful, free apps.
You are welcome to eSign PDF, Word, Excel, PowerPoint documents with free to use online GroupDocs Signature App. |
kervin:~ jeanmi$ diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_HFS Sans titre 499.9 GB disk0s2
3: Apple_HFS Clone 499.8 GB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *320.1 GB disk1
1: EFI EFI 209.7 MB disk1s1
2: Apple_HFS Macintosh HD 319.2 GB disk1s2
3: Apple_Boot Recovery HD 650.0 MB disk1s3
kervin:~ jeanmi$ diskutil info disk1s2
Could not find disk: disk1s2
kervin:~ jeanmi$ diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_HFS Sans titre 499.9 GB disk0s2
3: Apple_HFS Clone 499.8 GB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: *0 B disk1
kervin:~ jeanmi$ diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_HFS Sans titre 499.9 GB disk0s2
3: Apple_HFS Clone 499.8 GB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: *320.1 GB disk1
/dev/disk2
#: TYPE NAME SIZE IDENTIFIER
0: Apple_partition_scheme *18.0 MB disk2
1: Apple_partition_map 32.3 KB disk2s1
2: Apple_HFS Flash Player 18.0 MB disk2s2
kervin:~ jeanmi$ diskutil eraseDisk jhfs+ "Macintosh HD" gpt disk1
Started erase on disk1
Unmounting disk
Error: -69760: Unable to write to the last block of the device
/dev/disk0 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *500.1 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_HFS OS X Base System 499.2 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +5.2 MB disk1
/dev/disk2 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk2
/dev/disk3 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk3
/dev/disk4 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk4
/dev/disk5 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk5
/dev/disk6 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk6
/dev/disk7 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk7
/dev/disk8 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +12.6 MB disk8
/dev/disk9 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +4.2 MB disk9
/dev/disk10 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +1.0 MB disk10
/dev/disk11 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk11
/dev/disk12 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk12
/dev/disk13 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk13
/dev/disk14 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +1.0 MB disk14
/dev/disk15 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +6.3 MB disk15
/dev/disk16 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +6.3 MB disk16
/dev/disk17 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk17
/dev/disk18 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk18
2: Apple_HFS OS X Base System 499.2 GB disk0s2
diskutil eraseVolume jhfs+ "Macintosh HD" disk0s2
-bash-3.2# diskutil eraseVolume jhfs+ "Macintosh HD" disk0s2
Started erase on disk0s2 OS X Base System
Unmounting disk
Erasing
Initialized /dev/rdisk0s2 as a 465 GB case-insensitive HFS Plus volume with a 40960k journal
Mounting disk
Finished erase on disk0s2 Macintosh HD
-bash-3.2# diskutil list
/dev/disk0 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *500.1 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_HFS Macintosh HD 499.2 GB disk0s2
/dev/disk1 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme +2.1 GB disk1
1: Apple_HFS OS X Base System 2.0 GB disk1s1
/dev/disk2 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +5.2 MB disk2
/dev/disk3 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk3
/dev/disk4 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk4
/dev/disk5 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk5
/dev/disk6 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk6
/dev/disk7 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk7
/dev/disk8 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk8
/dev/disk9 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +12.6 MB disk9
/dev/disk10 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +4.2 MB disk10
/dev/disk11 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +1.0 MB disk11
/dev/disk12 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk12
/dev/disk13 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk13
/dev/disk14 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk14
/dev/disk15 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +1.0 MB disk15
/dev/disk16 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +6.3 MB disk16
/dev/disk17 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +6.3 MB disk17
/dev/disk18 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +524.3 KB disk18
/dev/disk19 (disk image):
#: TYPE NAME SIZE IDENTIFIER
0: untitled +2.1 MB disk19
-bash-3.2#
#!/usr/bin/python3
# Randomize the lines from stdin.
#
# Uses a stronger source of entropy, so you don't end up with repeated output
# (or at least repeated starts) as often as with sort -R or
# perl -MList::Util -e 'print List::Util::shuffle <>'
#
# Copyright (c) 2015 Peter Palfrader <peter@palfrader.org>
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import argparse
import random
import sys
parser = argparse.ArgumentParser(description='Randomize the lines from stdin.')
parser.add_argument('-s', '--sample', metavar='LINES', type=int, nargs='?', const=1, help='Print only this many lines (1).')
args = parser.parse_args()
lines = sys.stdin.readlines()
r = random.SystemRandom()
if args.sample is None:
r.shuffle(lines)
else:
lines = r.sample(lines, args.sample)
for l in lines:
print(l, end='')
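Both modes of the script can be tried straight from Python using the same stdlib calls it makes (toy input lines here):

```python
import random

r = random.SystemRandom()  # OS entropy, the same source the script uses
lines = ["a\n", "b\n", "c\n", "d\n"]

# --sample N mode: pick N distinct lines, no repeats
picked = r.sample(lines, 2)
assert len(picked) == 2 and set(picked) <= set(lines)

# default mode: permute every line in place
shuffled = lines[:]
r.shuffle(shuffled)
assert sorted(shuffled) == sorted(lines)
```

Note that SystemRandom cannot be seeded, so its output is deliberately non-reproducible.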
You can use the Facebook pixel to track your website visitors' actions. This is called conversion tracking. Tracked conversions appear in the Facebook Ads Manager and the Facebook Analytics dashboard, where they can be used to analyze the effectiveness of your conversion funnel and to calculate your return on ad investment. You can also use tracked conversions to define custom audiences for ad optimization and dynamic ad campaigns. Once you have defined custom audiences, we can use them to identify other Facebook users who are likely to convert and target them with your ads.
There are three ways to track conversions with the pixel:
The pixel's base code must already be installed on every page where you want to track conversions.
Standard events are predefined visitor actions that correspond to common, conversion-related activities, such as searching for a product, viewing a product, or purchasing a product. Standard events support parameters, which allow you to include an object containing additional information about an event, such as product IDs, categories, and the number of products purchased.
All standard events are tracked by calling the pixel's fbq('track') function, with the event name, and (optionally) a JSON object as its parameters. For example, here's a function call to track when a visitor has completed a purchase event, with currency and value included as a parameter:
fbq('track', 'Purchase', {currency: "USD", value: 30.00});
If you called that function, it would be tracked as a purchase event in the Events Manager:
You can call the fbq('track') function anywhere between your web page's opening and closing <body> tags, either when the page loads, or when a visitor completes an action, such as clicking a button.
For example, if you wanted to track a standard purchase event after a visitor has completed the purchase, you could call the fbq('track') function on your purchase confirmation page, like this:
<body>
  ...
  <script>
    fbq('track', 'Purchase', {currency: "USD", value: 30.00});
  </script>
  ...
</body>
If instead you wanted to track a standard purchase event when the visitor clicks a purchase button, you could tie the fbq('track') function call to the purchase button on your checkout page, like this:
<button id="addToCartButton">Purchase</button>
<script type="text/javascript">
  $('#addToCartButton').click(function() {
    fbq('track', 'Purchase', {currency: "USD", value: 30.00});
  });
</script>
Note that the example above uses jQuery to trigger the function call, but you could trigger the function call using any method you wish.
If our predefined standard events aren't suitable for your needs, you can track your own custom events, which also can be used to define custom audiences for ad optimization. Custom events also support parameters, which you can include to provide additional information about each custom event.
You can track custom events by calling the pixel's fbq('trackCustom') function, with your custom event name and (optionally) a JSON object as its parameters. Just like standard events, you can call the fbq('trackCustom') function anywhere between your webpage's opening and closing <body> tags, either when your page loads, or when a visitor performs an action like clicking a button.
For example, let's say you wanted to track visitors who share a promotion in order to get a discount. You could track them using a custom event like this:
fbq('trackCustom', 'ShareDiscount', {promotion: 'share_discount_10%'});
Custom event names must be strings, and cannot exceed 50 characters in length.
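That length constraint is easy to check before calling fbq; a tiny validator sketch (the helper name is ours, not part of the pixel API):

```python
def valid_custom_event_name(name):
    # custom event names must be strings no longer than 50 characters
    return isinstance(name, str) and 0 < len(name) <= 50

print(valid_custom_event_name('ShareDiscount'))  # True
print(valid_custom_event_name('x' * 51))         # False
```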
Each time the pixel loads, it automatically calls fbq('track', 'PageView') to track a PageView standard event. PageView standard events record the referrer URL of the page that triggered the function call. You can use these recorded URLs in the Events Manager to define visitor actions that should be tracked.
For example, let's say that you send visitors who subscribe to your mailing list to a thank you page. You could set up a custom conversion that tracks website visitors who have viewed any page that has /thank-you in its URL. Assuming your thank you page is the only page with /thank-you in its URL, and you've installed the pixel on that page, anyone who views it will be tracked using that custom conversion.
Once tracked, custom conversions can be used to optimize your ad campaigns, to define custom audiences, and to further refine custom audiences that rely on standard or custom events.
Since custom conversions rely on complete or partial URLs, you should make sure that you can define visitor actions exclusively based on unique strings in your website URLs.
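As a rough illustration of the rule semantics (our own sketch, not Facebook's implementation), the i_contains operator used in pixel_rule definitions behaves like a case-insensitive substring check on the URL:

```python
def i_contains(url, needle):
    # case-insensitive substring match, mirroring the semantics of the
    # 'i_contains' operator in pixel_rule definitions
    return needle.lower() in url.lower()

print(i_contains('https://example.com/Thank-You?src=email', '/thank-you'))  # True
print(i_contains('https://example.com/checkout', '/thank-you'))             # False
```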
Custom conversions are created entirely within the Events Manager. Refer to our Advertiser Help document to learn how.
Optimize for actions and track them without adding anything to your Facebook pixel base code. You can do this beyond the 9 standard events.
To create a custom conversion, POST to /{AD_ACCOUNT_ID}/customconversions and define the visitor action with a pixel_rule. For example, thankyou.html is a page appearing after a purchase.
This records a PURCHASE conversion when 'thankyou.html' displays:
use FacebookAds\Object\CustomConversion;
use FacebookAds\Object\Fields\CustomConversionFields;
$custom_conversion = new CustomConversion(null, 'act_<AD_ACCOUNT_ID>');
$custom_conversion->setData(array(
CustomConversionFields::NAME => 'Example Custom conversion',
CustomConversionFields::PIXEL_ID => <PIXEL_ID>,
CustomConversionFields::PIXEL_RULE => array(
'url' => array('i_contains' => 'thank-you.html'),
),
CustomConversionFields::CUSTOM_EVENT_TYPE => 'PURCHASE',
));
$custom_conversion->create();
from facebookads.adobjects.customconversion import CustomConversion
custom_conversion = CustomConversion(parent_id='act_<AD_ACCOUNT_ID>')
custom_conversion.update({
CustomConversion.Field.name: 'Example Custom Conversion',
CustomConversion.Field.pixel_id: <PIXEL_ID>,
CustomConversion.Field.pixel_rule: {
'url': {'i_contains': 'thankyou.html'},
},
CustomConversion.Field.custom_event_type: 'PURCHASE',
})
custom_conversion.remote_create()
curl \
-F 'name=Example Custom conversion' \
-F 'pixel_id=<PIXEL_ID>' \
-F 'pixel_rule={"url":{"i_contains":"thank-you.html"}}' \
-F 'custom_event_type=PURCHASE' \
-F 'access_token=<ACCESS_TOKEN>' \
https://graph.facebook.com/v2.8/act_<AD_ACCOUNT_ID>/customconversions
You can then create your campaign using the CONVERSIONS objective.
At the ad set level, specify the same custom conversion (pixel_id, pixel_rule, custom_event_type) in promoted_object.
Ads Insights returns information about Custom Conversions:
curl -i -G \
  -d 'fields=actions,action_values' \
  -d 'access_token=<ACCESS_TOKEN>' \
  https://graph.facebook.com/v2.7/<AD_ID>/insights
Returns both standard and custom conversions:
{
  "data": [
    {
      "actions": [
        { "action_type": "offsite_conversion.custom.17067367629523", "value": 1225 },
        { "action_type": "offsite_conversion.fb_pixel_purchase", "value": 205 }
      ],
      "action_values": [
        { "action_type": "offsite_conversion.custom.1706736762929507", "value": 29390.89 },
        { "action_type": "offsite_conversion.fb_pixel_purchase", "value": 29390.89 }
      ],
      "date_start": "2016-07-28",
      "date_stop": "2016-08-26"
    }
  ],
  "paging": {
    "cursors": { "before": "MAZDZD", "after": "MjQZD" },
    "next": "https://graph.facebook.com/v2.7/<AD_ID>/insights?access_token=<ACCESS_TOKEN>&pretty=0&fields=actions%2Caction_values&date_preset=last_30_days&level=adset&limit=25&after=MjQZD"
  }
}
Custom conversions have unique IDs; query it for a specific conversion, such as a rule-based one:
curl -i -G \
  -d 'fields=name,pixel,pixel_aggregation_rule' \
  -d 'access_token=<ACCESS_TOKEN>' \
  https://graph.facebook.com/v2.7/<CUSTOM_CONVERSION_ID>
The maximum number of custom conversions per ad account is 40. If you use Ads Insights API to get metrics on custom conversions:
Parameters are optional, JSON-formatted objects that you can include when tracking standard and custom events. They allow you to provide additional information about your website visitors' actions. Once tracked, parameters can be used to further define any custom audiences you create.
To include a parameter object with a standard or custom event, format your parameter data as an object using JSON, then include it as the third function parameter when calling the fbq('track') or fbq('trackCustom') functions.
For example, let's say you wanted to track a visitor who purchased multiple products as a result of your promotion. You could do this:
fbq('track', 'Purchase',
  // begin parameter object data
  {
    value: 115.00,
    currency: 'USD',
    contents: [
      { id: '301', quantity: 1 },
      { id: '401', quantity: 2 }
    ],
    content_type: 'product'
  }
  // end parameter object data
);
Note that if you want to use data included in event parameters when defining custom audiences, key values must not contain any spaces.
You can include the following predefined object properties with any custom events and any standard events that support them. Format your parameter object data using JSON.
Property Key (Value Type): Parameter Description
content_category (string): Category of the page or product.
content_ids (array of integers or strings): Product IDs associated with the event, such as SKUs.
content_name (string): Name of the page or product.
content_type (string): Either 'product' or 'product_group', depending on whether the IDs passed refer to products or product groups.
contents (array of objects): Array of JSON objects that contains the International Article Number (EAN) when applicable, or other product or content identifier(s) associated with the event, plus quantities and prices of the products.
currency (string): Currency for the value specified.
delivery_category (string): Category of the delivery. Supported values: in_store, curbside, home_delivery.
num_items (integer): Number of items when checkout was initiated. Used with the InitiateCheckout event.
predicted_ltv (integer or float): Predicted lifetime value of a subscriber as defined by the advertiser and expressed as an exact value.
search_string (string): String entered by the user for the search. Used with the Search event.
status (Boolean): Used with the CompleteRegistration event, to indicate the status of the registration.
value (integer or float): Value of a user performing this event to the business.
If our predefined object properties don't suit your needs, you can include your own, custom properties. Custom properties can be used with both standard and custom events, and can help you further define custom audiences.
For example, let's say you wanted to track a visitor who purchased multiple products after having first compared them to other products. You could do this:
fbq('track', 'Purchase',
  // begin parameter object data
  {
    value: 115.00,
    currency: 'USD',
    contents: [
      { id: '301', quantity: 1 },
      { id: '401', quantity: 2 }
    ],
    content_type: 'product',
    compared_product: 'recommended-banner-shoes',  // custom property
    delivery_category: 'in_store'
  }
  // end parameter object data
);
Now that you're tracking conversions, we recommend that you use them to define custom audiences, so you can optimize your ads for website conversions. |
Text Language Identification is the process of predicting the language of a given a piece of text. You might have encountered it when Chrome shows a popup to translate a webpage when it detects that the content is not in English. Behind the scenes, Chrome is using a model to predict the language of text used on a webpage.
When working with a dataset for NLP, the corpus may contain a mixed set of languages. Here, language identification can be useful to either filter out a few languages or to translate the corpus to a single language and then use for your downstream tasks.
In this post, I will explain the working mechanism and usage of the fasttext language detection library.
Fasttext is an open-source library in Python for word embeddings and text classification. It is built for production use case rather than research and hence is optimized for performance and size. It extends the Word2Vec model with ideas such as using subword information and model compression.
For our purpose of language identification, we can use the pre-trained fasttext language identification models. The model was trained on a dataset drawn from Wikipedia, Tatoeba, and SETimes. The basic idea is to prepare a training data of (text, language) pairs and then train a classifier on it.
The benchmark below shows that these pre-trained language detection models are better than langid.py, another popular python language detection library. Fasttext has better accuracy and also the inference time is very fast. It supports a wide variety of languages including French, German, English, Spanish, Chinese.
Install the fasttext library using pip.
pip install fasttext
There are two versions of the pre-trained models. Choose the model which fits your memory and space requirements:
Download the pre-trained model from Fasttext to some location. You'll need to specify this location later in the code. In our example, we download it to the /tmp directory.
wget -O /tmp/lid.176.bin https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
Now, we import fasttext and then load the model from the pretrained path we downloaded earlier.
import fasttext

PRETRAINED_MODEL_PATH = '/tmp/lid.176.bin'
model = fasttext.load_model(PRETRAINED_MODEL_PATH)
Let's take an example sentence in French which means 'I eat food'. To detect language with fasttext, just pass a list of sentences to the predict function. The sentences should be in the UTF-8 format.
sentences = ['je mange de la nourriture']
predictions = model.predict(sentences)
print(predictions)
# ([['__label__fr']], array([[0.96568173]]))
The model returns two tuples: one is an array of language labels and the other is the confidence for each sentence. Here fr is the ISO 639 code for French, and the model is about 96.57% confident that the language is French.
Fasttext returns the ISO code for the most probable language among the 176 it supports. You can refer to the page on ISO 639 codes to find the language for each symbol.
af als am an ar arz as ast av az azb ba bar bcl be bg bh bn bo bpy br bs bxr ca cbk ce ceb ckb co cs cv cy da de diq dsb dty dv el eml en eo es et eu fa fi fr frr fy ga gd gl gn gom gu gv he hi hif hr hsb ht hu hy ia id ie ilo io is it ja jbo jv ka kk km kn ko krc ku kv kw ky la lb lez li lmo lo lrc lt lv mai mg mhr min mk ml mn mr mrj ms mt mwl my myv mzn nah nap nds ne new nl nn no oc or os pa pam pfl pl pms pnb ps pt qu rm ro ru rue sa sah sc scn sco sd sh si sk sl so sq sr su sv sw ta te tg th tk tl tr tt tyv ug uk ur uz vec vep vi vls vo wa war wuu xal xmf yi yo yue zh
To programmatically convert language symbols back to the language name, you can use pycountry package. Install the package using pip.
pip install pycountry
Now, pass the symbol to pycountry and you will get back the language name.
from pycountry import languages

lang_name = languages.get(alpha_2='fr').name
print(lang_name)  # French
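One small wrinkle when combining the two libraries: fasttext returns labels with a __label__ prefix (as in the output above), which has to be stripped before the ISO code can be looked up. A minimal helper:

```python
def label_to_code(label):
    # fasttext labels look like '__label__fr'; strip the prefix
    # before looking up the bare ISO 639 code elsewhere
    prefix = '__label__'
    return label[len(prefix):] if label.startswith(prefix) else label

print(label_to_code('__label__fr'))  # fr
```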
Thus, we saw how fasttext can be used for language detection in Python. This is very useful for filtering out and dealing with non-English responses in Natural Language Processing projects.
If you enjoyed this blog post, feel free to connect with me on Twitter where I share new blog posts every week. |
Hello. Using the telebot library, I have several message handlers and callback query handlers, plus a separate function that sends a message to a user if that user is in the bot's database.
When I import another file that contains just one function (a function that loops via twisted), the bot stops reacting to messages (most likely because of that loop). How do I correctly call the message-sending function from the main file inside that looping function, without interrupting the handlers?
So far I have only gotten as far as multiprocessing, but I keep wrapping everything into classes in a clumsy way and in the end nothing works.
A short example:
import mysql.connector
import telebot
import urllib3
@bot.message_handler(commands=['start'])
def send_welcome(message):
    bot.send_message(message.chat.id, "Привет.")

def send_to_user(abcp_id, number, comment, id_from):  # called from a separate file
    number_html = f'<a href="https://{domain}{method}={number}">{number}</a>'
    text_to_user = number_html + comment
    # note: parameterized queries would be safer than f-string SQL
    sql_check = f"SELECT * FROM to_notify WHERE number ='{number}'"
    cursor.execute(sql_check)
    result_check = cursor.fetchall()
    print(result_check)
    sql = f"SELECT * FROM staff WHERE id ='{id_from}'"  # was `_id`, which is undefined here
    cursor.execute(sql)
    my_result = cursor.fetchall()
if __name__ == '__main__': ...
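One common way to approach this (a stdlib-only sketch with hypothetical names, not tied to telebot or twisted) is to run the looping function in its own thread and hand messages back through a queue, so the polling loop and handlers in the main file are never blocked:

```python
import queue
import threading

outbox = queue.Queue()  # thread-safe channel between the loop and the bot

def background_loop():
    # stands in for the looping function from the imported file; it only
    # enqueues (abcp_id, number, comment, id_from) instead of calling the bot
    outbox.put(("42", "A123", "ready", "7"))

def sender_worker(send_func):
    # drains the queue and calls the real send function; None is a stop signal
    while True:
        args = outbox.get()
        if args is None:
            break
        send_func(*args)

sent = []  # in the real bot this stub would be send_to_user
worker = threading.Thread(target=sender_worker, args=(lambda *a: sent.append(a),))
worker.start()
background_loop()
outbox.put(None)
worker.join()
print(sent)  # [('42', 'A123', 'ready', '7')]
```

With this layout, bot.polling() can keep running in the main thread while the background loop produces work.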
You cannot do that, because this error is raised before the line of code is even executed.
There was a comment here that has since been deleted)
It suggested this solution:
try:
    ...  # line with the error
except SyntaxError:
    ...  # action on error
This kind of behavior cannot be handled; a clear example:
try:
    a + a = a
except SyntaxError:
    print("oops")
Running this code, we get the SyntaxError before any handling happens, and that's all there is to it)
You can handle it this way:
try:
    eval("a + a = a")
except SyntaxError:
    print("oops")
But using eval is, in itself, bad practice.
upd: such exceptions are raised during the initial parsing of the code, before any try/except even comes into play.
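A related option worth noting (our own sketch): compile() parses a string without executing it, so the same check works without eval's side effects:

```python
caught = None
try:
    # compile() only parses the source; nothing is executed, yet the
    # SyntaxError is raised at a point where except CAN see it
    compile("a + a = a", "<string>", "exec")
except SyntaxError as err:
    caught = err
print(type(caught).__name__)  # SyntaxError
```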
Key Takeaways
A synthetic dataset is one that resembles the real dataset, which is made possible by learning the statistical properties of the real dataset.
Synthetic data can help to solve the common problem of data scarcity and protects data privacy, making it easier to share data and improve model robustness. This is particularly beneficial for financial institutions.
To generate synthetic samples, different algorithms should be applied to different types of data. CTGAN is a great open source project at MIT that provides desirable results for generating synthetic tabular data.
We explore the power of synthetic data generation through the application of the CTGAN on a payment dataset and learn how to evaluate synthetic data samples.
Data is the lifeblood of artificial intelligence. Without sufficient data, we are unable to train models and then our powerful and expensive hardware sits idle. Data contains the information from which we want models to draw patterns, extract insights, generate predictions, build smarter products, and develop into more intelligent models.
However, data is typically difficult to procure and oftentimes the data collection process can be more arduous and time-consuming than building the actual machine learning models.
There is a science to collecting good, high-quality, and clean data, as it can be a time-intensive and expensive process. In some cases, data is highly regulated, meaning long lead times to secure the permissions to access it. Even when secured, the size of a dataset might be so limited that training models is out of the question. To address this challenge, we need synthetic data.
Synthetic data is data that is artificially generated rather than collected by real-world events. It is data that serves the purpose of resembling a real dataset but is entirely fake in nature. Data has a distribution, a shape that defines the way it looks. Picture a dataset in a tabular format.
We have all these different columns and there are hidden interactions between the columns, as well as inherent correlations and patterns. If we can build a model to understand the way the data looks, interacts, and behaves, then we can query it and generate millions of additional synthetic records that look, act, and feel like the real thing.
Now, synthetic data isn’t a magical process. We can’t start with just a few poor-quality data points and expect to have a miraculous high-quality synthetic dataset from our model. Just like the old saying goes, "garbage in, garbage out," in order to create high-quality synthetic data, we need to start with a dataset that is both high-quality and plentiful in size. With this, it is possible to expand our current dataset with high-quality synthetic data points.
In this article, I will discuss the benefits of using synthetic data, which types are most appropriate for different use cases, and explore its application in financial services.
Why is synthetic data useful?
If we already have a decent high-quality dataset, is there any point in trying to acquire additional fake data points? The answer should always be an emphatic 'Yes!' And here's why.
Say you have a dataset with a heavily skewed balance on the column you are trying to predict, and you can't obtain more data for the minority class. We can use synthetic data to synthesize more data points for the minority class and rebalance the training set in search of a performance increase. For example, suppose the task is to predict whether a piece of fruit is an apple or an orange by learning the attributes of these two fruits - their color, shape, seasonality, etc. If there are 4,000 samples of apples but only 200 samples of oranges, then any machine learning algorithm is likely to be biased towards apples due to the large class imbalance, which can result in an inaccurate model and undesirable performance. However, if we can generate 3,800 more synthetic samples of oranges, the model won't be biased toward either fruit and can make more accurate predictions, since the two classes are balanced.
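The rebalancing idea can be sketched in a few lines. This toy example reuses the fruit counts from the paragraph above and uses naive random duplication as a stand-in; a real synthetic-data generator would create new, realistic records for the minority class instead of copies:

```python
import random

def oversample(labels, minority, target_count, seed=0):
    """Naively duplicate minority-class entries until that class reaches
    target_count. A synthetic-data generator would instead create *new*,
    realistic records for the class rather than copies."""
    rng = random.Random(seed)
    current = [x for x in labels if x == minority]
    extra = [rng.choice(current) for _ in range(target_count - len(current))]
    return labels + extra

labels = ["apple"] * 4000 + ["orange"] * 200
balanced = oversample(labels, "orange", 4000)
print(balanced.count("apple"), balanced.count("orange"))  # 4000 4000
```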
Additionally, say you have a set of data that you wish to share. The caveat here is that the data is sensitive and in this case, data privacy is extremely important. Many datasets contain personally identifiable information (PII) or other sensitive attributes such as a person’s full name, social security number, bank account number, etc., making it difficult to share them with a third party in order to carry out any kind of data analysis or model building. This gets into the hassle of anonymizing data, picking and choosing non-personally identifiable information, sitting with legal teams, creating secure data transfer processes, and much more. This process can lead to months of delay in creating a solution, as the data needed for a model can’t be shared immediately. To combat this, we can leverage synthetic samples from the real dataset that still preserve the important characteristics of the real data that can be more easily shared without the risk of invading data privacy and leaking personal information.
Why might it be useful in financial services?
Financial services are at the top of the list when it comes to concerns around data privacy. The data is sensitive and highly regulated. In addition to improving machine learning model performance, it’s no surprise that the use of synthetic data has grown rapidly in the financial services field, as it allows institutions to more easily share their data.
It’s also difficult to obtain more financial data. For example, to get more customer checking account data to feed a model, we need more customers to open up checking accounts. Then we need to wait a length of time for them to start using the accounts and building up transaction histories. However, with synthetic data, we can look at our current customer base and synthesize new checking accounts with their associated usage, allowing us to use this data right away.
Different types of synthetic data
If you google synthetic data, you will find all different types of data mediums being synthesized. Most commonly, you will see unstructured data, such as synthetic paintings from image data, synthetic videos for advertisements, and synthetic audio for popular public figures. These are some really interesting data types to synthesize, but in financial services, just like many other industries, we commonly deal with databases and flat tabular files containing numerical, categorical, and text-based data points. Additionally, we have data ordered by time and data tables that are relational in nature.
It is important to note that there isn’t one perfect synthetic data generation algorithm that can handle any type of data. When looking into synthesizing your dataset you need to look at the characteristics and understand which algorithm is right for your data.
Popular methods for generating synthetic data
So, if you google "synthetic data generation algorithms" you will probably see two common phrases: GANs and Variational Autoencoders. These are two classes of algorithms that have generative properties, i.e., the ability to create data. Heavy research and development have been done around these models and many synthetic data architectures, from images to audio to tabular data to text data, have been created using these core methodologies. Let’s briefly discuss these two architectures.
GANs, properly known as generative adversarial networks, are two neural networks, (namely the generator network and the discriminator network), that play a game against one another. The generator tries to generate fake or synthetic data while the discriminator network tries to determine if the data it is seeing is real or fake. As the two networks battle it out, the generator learns to create better and better fake data, which makes the task harder for the discriminator.
Variational autoencoders are neural networks whose goal is to predict their input. In traditional supervised machine learning tasks, we have an input and an output. With autoencoders, the goal is to use the input to predict and try to reconstruct it. Here, we have two parts to the network: the encoder and the decoder. The encoder compresses the input and creates a smaller version of it. The decoder takes this compressed input and tries to reconstruct the original input. The idea here is that we are learning how to represent the data by scaling it down in the encoder and building it back up from the decoder. If we can accurately rebuild the original input, then we can query the decoder to generate synthetic samples.
There are many machine learning algorithms for generating synthetic data out there, but which one performs the best all depends on the specific data types that you are working with. So, it would be smart to explore the data before making a choice.
How to evaluate synthetic data samples
Once you have a synthetic dataset, you need to ensure that it is of high quality. There are many synthetic data generation algorithms for different types of data, but how do we make sure that the generated, fake samples truly mimic the real data? I will now introduce some methods and tips on how to evaluate synthetic data. Since data exists in many different forms, we will be focusing on tabular data that is non-time series.
There are two core components for validating synthetic data: statistical similarity to the real dataset and machine learning efficacy.
Statistical Similarity
As previously mentioned, data has a distribution. It has a look and feel. It has interactions with other data fields and behaves in its own respective manner. When we have a synthetic dataset and a real dataset, we want to make sure we have similar distributions. We want to make sure the column distribution looks the same. If we have data imbalances, we want to make sure our synthetic dataset captures these imbalances. Here, we want to plot side-by-side histograms, scatterplots, and cumulative sums of each column to ensure we have a similar look.
The next step is to look at correlations. If we have interactions between columns in our real dataset, then we should expect a properly generated synthetic dataset to have similar interactions. To do so, we can plot a correlation matrix of both the real and synthetic sets as well as a difference in correlation values between the two to get an idea of how similar or different the correlation matrices are.
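As a sketch of that check, the correlation matrices of the two sets can be compared element-wise with NumPy; the data and threshold here are illustrative, not from the article:

```python
import numpy as np

def correlation_gap(real, synthetic):
    """Maximum absolute difference between the two correlation matrices."""
    return np.abs(np.corrcoef(real, rowvar=False)
                  - np.corrcoef(synthetic, rowvar=False)).max()

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 3))
real[:, 2] = real[:, 0] + 0.1 * rng.normal(size=1000)   # strong col-0/col-2 correlation
good = real + 0.05 * rng.normal(size=real.shape)        # faithful "synthetic" set
bad = rng.normal(size=(1000, 3))                        # ignores the correlation

# A faithful synthetic set should have a much smaller gap than a naive one.
print(correlation_gap(real, good) < correlation_gap(real, bad))  # True
```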
Machine Learning Efficacy
If our dataset contains a target variable or column that we are interested in predicting and building a model from, we can dive into machine learning efficacy. This measures how well the synthetic data performs under different models. The idea here is that if we can build and train a model on the synthetic dataset, and it performs well upon evaluation on real data, then we have a good synthetic dataset. To do this, we look at classification metrics that are appropriate for the problem at hand, such as (but not limited to) F1 score and regression metrics, such as RMSE. The performance, represented by evaluation metrics on the regression/classification models, can then be averaged across these metrics, which will give us a final performance score on the machine learning efficacy of the synthetic data.
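The train-on-synthetic, evaluate-on-real protocol can be sketched as follows. A simple nearest-centroid classifier stands in for whatever model family you would actually benchmark, and all the data is made up for illustration:

```python
import numpy as np

def fit_centroids(X, y):
    """'Train': compute one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each row to the class of its nearest centroid."""
    classes = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array([classes[i] for i in d.argmin(axis=0)])

rng = np.random.default_rng(1)
# "Real" data: two well-separated classes.
real_X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
real_y = np.array([0] * 200 + [1] * 200)
# "Synthetic" data drawn from the same distributions.
syn_X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
syn_y = real_y.copy()

model = fit_centroids(syn_X, syn_y)                   # train on synthetic
accuracy = (predict(model, real_X) == real_y).mean()  # evaluate on real
print(accuracy)  # close to 1.0 when the synthetic set mimics the real one
```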
Synthetic data generation in finance
Choosing the right synthetic data generation algorithm depends greatly on the type of data we are dealing with. Since most of the datasets that we work with in the financial industry exist in tabular format, it would be preferable for us to use a machine learning model that is designed specifically for tabular data. Fortunately, there is an open-source project at MIT that developed exactly such an algorithm called CTGAN (Conditional GAN for Tabular Data). As previously discussed, GANs consist of two neural networks: a generator and a discriminator. The CTGAN is a spin-off of this methodology and takes generating data to a different level.
As data scientists, we often deal with tabular data with mixed data types, from numerical to categorical columns. With numerical columns, the distribution of values can become much more complex than an ideal Gaussian distribution. With categorical columns, the common problem is class imbalance, meaning there will be too many data points in some categories but not enough data points in the other categories. It is quite a challenge for traditional GAN models to successfully learn from these data points with non-Gaussian distributions or class imbalances. To produce highly realistic data of this nature we turn to the CTGAN. This model separates the numerical and categorical columns and uses alternate methods to learn the distributions. The Variational Gaussian Mixture Model can detect the modes of continuous columns, while the conditional generator and training-by-sampling will solve any prominent class imbalance problems. Then the two fully-connected layers in the network can efficiently learn the data distributions and the network will generate samples using mixed activation functions since there are both numerical and categorical values.
Figure 1: Diagram of a synthetic data generation model with CTGAN
Next, let’s see how we can use the CTGAN in a real-life example in the world of financial services.
To start, we import all the necessary libraries. The CTGAN model is built on top of PyTorch and the table_evaluator library is designed specifically for evaluating tabular data, which will be quite useful to see how our generated samples are performing.
import pandas as pd
import numpy as np
from dateutil import parser
import torch
import seaborn as sns  # used later for the correlation heatmaps
from ctgan import CTGANSynthesizer
from table_evaluator import load_data, TableEvaluator
The dataset used in this example is the IBM Late Payment Histories dataset that is publicly available through Kaggle. For our example, we will be trying to predict if the payment will be late or not.
Let’s first read in the data and look at the first five rows. We will need to specify the path where our IBM_Late_Payment.csv file is located.
df = pd.read_csv('… /IBM_Late_Payment.csv')
df.head()
Figure 2: Original payment data samples
Now it’s time for data preprocessing. We want to make every column readable for the CTGAN model. We first map the ‘customerID’ column to a finite number of discrete integers, which will be called ‘CustomerIDMap’. Then, we convert all the columns that contain dates to a numerical representation that the CTGAN can effectively model.
def convert_dates(df,date_cols):
#Turn dates into epochs (seconds)
for i in date_cols:
df[i] = df[i].apply(lambda x: parser.parse(x).timestamp())
return df
customerID = df.customerID.unique().tolist()
customerID_map = dict(zip(customerID,range(len(customerID))))
df['CustomerIDMap'] = df['customerID'].apply(lambda i: customerID_map[i])
df.drop('customerID',axis=1,inplace=True)
df = convert_dates(df, ['PaperlessDate','InvoiceDate','DueDate','SettledDate'])
We also need to create a label column that dictates whether the payment is late or not. This will be the target column that we are trying to predict from the other feature columns.
df.loc[df['DaysLate'] > 0, 'IsLate'] = 'Yes'
df.loc[df['DaysLate'] <= 0, 'IsLate'] = 'No'
df.head()
Figure 3: Pre-processing the data points
We will be handling 2,466 data points, which is considered quite small for training a synthetic data generation algorithm. Usually, the more data points the better, but depending on the data quality, sometimes with fewer data points we can achieve the desired model performance.
df.shape
Out[105]: (2466, 13)
Again, to make sure all the columns have the correct type, we will convert all categorical columns to string values so that the model doesn’t confuse these columns with the continuous ones.
df.dtypes
df['countryCode'] = df['countryCode'].astype(str)
df['CustomerIDMap'] = df['CustomerIDMap'].astype(str)
df['invoiceNumber'] = df['invoiceNumber'].astype(str)
Another thing worth noting is that the CTGAN model can’t handle categorical columns with high cardinalities. So, any column with a large number of unique identifiers or infinite discrete values will cause issues in training. Such columns will need to be removed from training. In our case, we will need to remove the ‘invoiceNumber’ column. Since there are already other features created from the date columns, we will also be removing those in this particular case.
discrete_cols = df.dtypes[(df.dtypes == 'object')].index.tolist()
for i in discrete_cols:
print(i, len(df[i].unique()))
discrete_cols.remove('invoiceNumber')
df_training = df[['countryCode','InvoiceAmount','Disputed','PaperlessBill','DaysToSettle','DaysLate','CustomerIDMap','IsLate']]
Now it’s time to get prepared for training. The CTGAN model is a neural network-based model that requires intense training sessions, thus GPU usage is recommended.
To start training, we create an instance of the CTGANSynthesizer class and fit it with our data, specifying discrete columns. We then run a long training session on our relatively small dataset and train for 750 epochs with a batch size of 100.
ctgan = CTGANSynthesizer(batch_size=100)
ctgan.fit(df_training, discrete_cols, epochs=750)
After training is done, we can generate as many data samples as we want. For comparison, we will sample the same size as the original training dataset. The model returns all data as strings, so we recast the data columns in the synthetic set to be the same as in the real set.
samples = ctgan.sample(df_training.shape[0])
tys = df_training.dtypes.tolist()
for idx,i in enumerate(df_training.columns.tolist()):
samples[i] = samples[i].astype(tys[idx])
Let’s take a look at the generated samples. They look very realistic, don’t they? But to see how they really performed we need to use the proper evaluation methods mentioned earlier.
samples.head()
Figure 4: Synthetic data samples generated by CTGAN
We create a TableEvaluator instance, passing in the real set and the synthetic samples, also specifying all discrete columns.
table_evaluator = TableEvaluator(df_training, samples, cat_cols=discrete_cols)
table_evaluator.visual_evaluation()
Looking at the cumulative sums and distribution plots as a way to compare statistical similarity, we can tell that the synthetic samples represent the real ones very well.
Figure 5: Cumulative sum for each feature (blue for real samples and orange for fake samples)
Figure 6: Distribution or histogram for each feature (blue for real samples and orange for fake samples)
Besides distribution visualizations, we will also evaluate our synthetic samples based on machine learning efficacy. We call the evaluate function from table_evaluator and pass in the target column. From here, table_evaluator will build models from the real and fake data and evaluate against each respectively. The numbers all look great, once again confirming that the model has done a great job.
table_evaluator.evaluate(target_col='IsLate')
Figure 7: Evaluation metrics of the synthetic samples
Last but not least, we compare the correlation matrix of the real data to that of the generated samples. If the correlation matrix of the fake data looks similar to that of the real data, then we have a good synthetic dataset as our synthetic data has similar interactions to the real data. For ours, they do look similar, which is great.
sns.heatmap(df_training.corr(), cmap='coolwarm', center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
Figure 8: Heatmap of the real data
sns.heatmap(samples.corr(), cmap='coolwarm', center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
Figure 9: Heatmap of the synthetic data
From the unique model structure itself to the great results from our example, we can see that CTGAN is a fantastic tool for learning and generating synthetic samples on tabular data. If you would like to learn more about it, please check out the original Github project and share your thoughts with us!
About the Author
Dawn Li is a data scientist at Finastra’s Innovation Lab, where she stays up-to-date with the latest advances and applications of machine learning and applies them to solve problems in financial services. Dawn holds degrees in applied mathematics and statistics from Georgia Institute of Technology.
Baekjoon - [Gold 4] Phone Number List
Programming / Algorithm
Data Engineer kingsmo 2020. 9. 21. 23:49
Problem link: www.acmicpc.net/problem/5052
Problem description
- T: number of test cases
- N: number of phone numbers
- N phone numbers are given
For each test case, print YES if the list is consistent and NO otherwise.
A list is consistent when no number is a prefix of any other number.
ex)
911, 911234, 1023 => NO, because 911 is a prefix of 911234.
12340, 123440, 123450 => YES, since no number is a prefix of another.
Solution
I was going to solve this with a Trie, but it turned out the problem can be solved in a simpler way. (Trie data-structure problems are coming up soon...)
Since we only need to check for prefixes, it suffices to sort the numbers and then check whether each number is a prefix of the one right after it.
Note that checking every pair with a nested loop, without sorting, results in a time-limit exceeded.
Code
from sys import stdin
stdin = open("input.txt", "r")  # redirect stdin to a local input file for testing
# T: number of test cases
T = int(stdin.readline())
for t in range(T):
    # N: number of phone numbers
    N = int(stdin.readline())
    # read the numbers and sort them
    phone_numbers = [stdin.readline().strip() for _ in range(N)]
    phone_numbers.sort()
    flag = 0
    for i in range(len(phone_numbers) - 1):
        # check whether this number is a prefix of the next one
        if phone_numbers[i+1].startswith(phone_numbers[i]):
            print("NO")
            flag = 1
            break
    if flag == 0: print("YES")
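The same sort-then-check-neighbors idea can be packaged as a standalone function (independent of the stdin handling above), exercised on the two examples from the problem statement:

```python
def is_consistent(numbers):
    """After lexicographic sorting, any prefix pair must be adjacent,
    so one linear pass over neighbors suffices: O(N log N) overall."""
    numbers = sorted(numbers)
    return all(not numbers[i + 1].startswith(numbers[i])
               for i in range(len(numbers) - 1))

print(is_consistent(["911", "911234", "1023"]))      # False: 911 is a prefix
print(is_consistent(["12340", "123440", "123450"]))  # True: no prefixes
```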
JSON has become a fundamental data type for the internet and the Internet of Things (IoT); see the official JSON website for the specification. AnalyticDB for PostgreSQL provides full support for the JSON data type, and AnalyticDB for PostgreSQL 6.0 additionally supports the JSONB type. This section describes operations on the JSON & JSONB data types, including:
Similarities and differences between JSON & JSONB
JSON and JSONB are used in almost exactly the same way. They differ mainly in storage: the json type stores an exact copy of the input text, while the jsonb type stores data in a binary format. JSONB is more efficient than JSON, is significantly faster to process, and supports indexing; in general, JSONB is recommended over JSON in AnalyticDB for PostgreSQL 6.0.
JSON input and output syntax
The JSON input and output syntax supported by AnalyticDB for PostgreSQL follows RFC 7159.
A JSON value can be a simple value (number, string, true/false/null), an array, or an object. The following are all valid json expressions:
-- simple scalar / simple value
-- a simple value can be a number, a quoted string, true, false, or null
SELECT '5'::json;
-- an array of zero or more elements (the elements may be of different types)
SELECT '[1, 2, "foo", null]'::json;
-- an object containing key/value pairs
-- note that object keys must always be quoted strings
SELECT '{"bar": "baz", "balance": 7.77, "active": false}'::json;
-- arrays and objects can be nested arbitrarily
SELECT '{"foo": [true, "bar"], "tags": {"a": 1, "b": null}}'::json;
All of the JSON expressions above can also be written as JSONB, for example:
-- a simple scalar / simple value, cast to the jsonb type
SELECT '5'::jsonb;
JSON operators
The following table describes the operators that can be used with the JSON & JSONB data types.
Operator | Right operand type | Description | Example | Result
-> | int | Get a JSON array element (zero-based index). | '[{"a":"foo"}, {"b":"bar"}, {"c":"baz"}]'::json->2 | {"c":"baz"}
-> | text | Get a JSON object field by key. | '{"a": {"b":"foo"}}'::json->'a' | {"b":"foo"}
->> | int | Get a JSON array element as text. | '[1,2,3]'::json->>2 | 3
->> | text | Get a JSON object field as text. | '{"a":1,"b":2}'::json->>'b' | 2
#> | text[] | Get the JSON object at the specified path. | '{"a": {"b":{"c": "foo"}}}'::json#>'{a,b}' | {"c": "foo"}
#>> | text[] | Get the JSON object at the specified path as text. | '{"a":[1,2,3], "b":[4,5,6]}'::json#>>'{a,2}' | 3
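To see what these path-extraction operators do, the same lookups can be reproduced client-side with Python's standard json module; this illustrates the semantics only, not how you would query the database:

```python
import json

def json_path(doc, *path):
    """Walk a parsed JSON value by index (arrays) or key (objects),
    mimicking the #> operator's path lookup."""
    for step in path:
        doc = doc[step]
    return doc

doc = json.loads('{"a": {"b": {"c": "foo"}}}')
print(json_path(doc, "a", "b"))   # {'c': 'foo'}  -- like #>'{a,b}'

arr = json.loads('[1, 2, 3]')
print(json_path(arr, 2))          # 3             -- like ->2
```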
JSONB operators
The following table describes the operators that can be used with the JSONB data type.
Operator | Right operand type | Description | Example
= | jsonb | Whether the contents of two JSON values are equal | '[1,2]'::jsonb = '[1,2]'::jsonb
@> | jsonb | Whether the left JSON value contains the right JSON value | '{"a":1, "b":2}'::jsonb @> '{"b":2}'::jsonb
<@ | jsonb | Whether the left JSON value is contained in the right JSON value | '{"b":2}'::jsonb <@ '{"a":1, "b":2}'::jsonb
? | text | Whether the given string exists among the keys of the JSON object or its string-typed elements | '{"a":1, "b":2}'::jsonb ? 'b'
?| | text[] | Whether any element of the right-hand string array exists among the object's string keys or string elements | '{"a":1, "b":2, "c":3}'::jsonb ?| array['b', 'c']
?& | text[] | Whether all elements of the right-hand string array exist among the object's string keys or string elements | '["a", "b"]'::jsonb ?& array['a', 'b']
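As an illustration of the @> containment semantics (every key/value of the right-hand object must appear in the left-hand one, recursively, and array containment ignores order), here is a rough Python model; it mirrors the operator's behavior for simple cases, not PostgreSQL's exact implementation:

```python
def jsonb_contains(left, right):
    """Rough model of jsonb's @> operator: is right contained in left?"""
    if isinstance(right, dict):
        return (isinstance(left, dict)
                and all(k in left and jsonb_contains(left[k], v)
                        for k, v in right.items()))
    if isinstance(right, list):
        return (isinstance(left, list)
                and all(any(jsonb_contains(l, r) for l in left) for r in right))
    return left == right

print(jsonb_contains({"a": 1, "b": 2}, {"b": 2}))   # True, like the @> example
print(jsonb_contains({"b": 2}, {"a": 1, "b": 2}))   # False (but <@ would hold)
```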
JSON creation functions
The following table describes the functions used for creating JSON values.
Function | Description | Example | Result
to_json(anyelement) | Returns the value as a valid JSON value. Arrays and composites are processed recursively and converted to arrays and objects. If the input type has a cast to JSON, the cast function is used to perform the conversion; otherwise a JSON scalar value is produced. For any scalar type other than a number, a boolean, or null, the text representation is used, quoted and escaped as necessary to make it a valid JSON string. | to_json('Fred said "Hi."'::text) | "Fred said \"Hi.\""
array_to_json(anyarray [, pretty_bool]) | Returns the array as a JSON array. A multidimensional array becomes a JSON array of arrays. Note: if pretty_bool is true, line feeds are added between dimension-1 elements. | array_to_json('{{1,5},{99,100}}'::int[]) | [[1,5],[99,100]]
row_to_json(record [, pretty_bool]) | Returns the row as a JSON object. Note: if pretty_bool is true, line feeds are added between level-1 elements. | row_to_json(row(1,'foo')) | {"f1":1,"f2":"foo"}
JSON processing functions
The following table describes the functions for processing JSON values.
Function | Return type | Description | Example
json_extract_path_text(from_json json, VARIADIC path_elems text[]) | text | Returns the JSON value specified by path_elems as text. Equivalent to the #>> operator. | json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6')
json_object_keys(json) | setof text | Returns the set of keys in the outermost JSON object. | json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')
json_populate_record(base anyelement, from_json json) | anyelement | Expands the object in from_json to a row whose columns match the record type defined by base. | select * from json_populate_record(null::myrowtype, '{"a":1,"b":2}')
json_populate_recordset(base anyelement, from_json json) | set of anyelement | Expands the outermost array of objects in from_json to a set of rows whose columns match the record type defined by base. | select * from json_populate_recordset(null::myrowtype, '[{"a":1,"b":2},{"a":3,"b":4}]')
json_array_elements(json) | set of json | Expands a JSON array to a set of JSON values. | select * from json_array_elements('[1,true, [2,false]]')
value
-----------
1
true
[2,false]
Creating JSONB indexes
The JSONB type supports GIN and BTree indexes. Typically we create a GIN index on a JSONB column, using one of the following forms:
CREATE INDEX idx_name ON table_name USING gin (idx_col);
CREATE INDEX idx_name ON table_name USING gin (idx_col jsonb_path_ops);
Note: there are two ways to create a GIN index on JSONB: with the default jsonb_ops operator class, or with the jsonb_path_ops operator class. The difference is that a jsonb_ops GIN index creates a separate index entry for every key and value in the JSONB data, while jsonb_path_ops creates one index entry per value only.
JSON operation examples
Creating a table
create table tj(id serial, ary int[], obj json, num integer);
=> insert into tj(ary, obj, num) values('{1,5}'::int[], '{"obj":1}', 5);
INSERT 0 1
=> select row_to_json(q) from (select id, ary, obj, num from tj) as q;
row_to_json
-------------------------------------------
{"f1":1,"f2":[1,5],"f3":{"obj":1},"f4":5}
(1 row)
=> insert into tj(ary, obj, num) values('{2,5}'::int[], '{"obj":2}', 5);
INSERT 0 1
=> select row_to_json(q) from (select id, ary, obj, num from tj) as q;
row_to_json
-------------------------------------------
{"f1":1,"f2":[1,5],"f3":{"obj":1},"f4":5}
{"f1":2,"f2":[2,5],"f3":{"obj":2},"f4":5}
(2 rows)
Note: the JSON type cannot be used as a distribution key, and JSON aggregate functions are not supported.
Multi-table JOIN
create table tj2(id serial, ary int[], obj json, num integer);
=> insert into tj2(ary, obj, num) values('{2,5}'::int[], '{"obj":2}', 5);
INSERT 0 1
=> select * from tj, tj2 where tj.obj->>'obj' = tj2.obj->>'obj';
id | ary | obj | num | id | ary | obj | num
----+-------+-----------+-----+----+-------+-----------+-----
2 | {2,5} | {"obj":2} | 5 | 1 | {2,5} | {"obj":2} | 5
(1 row)
=> select * from tj, tj2 where json_object_field_text(tj.obj, 'obj') = json_object_field_text(tj2.obj, 'obj');
id | ary | obj | num | id | ary | obj | num
----+-------+-----------+-----+----+-------+-----------+-----
2 | {2,5} | {"obj":2} | 5 | 1 | {2,5} | {"obj":2} | 5
(1 row)
Function-based JSON index
CREATE TEMP TABLE test_json (
json_type text,
obj json
);
=> insert into test_json values('aa', '{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}');
INSERT 0 1
=> insert into test_json values('cc', '{"f7":{"f3":1},"f8":{"f5":99,"f6":"foo"}}');
INSERT 0 1
=> select obj->'f2' from test_json where json_type = 'aa';
?column?
----------
{"f3":1}
(1 row)
=> create index i on test_json (json_extract_path_text(obj, '{f4}'));
CREATE INDEX
=> select * from test_json where json_extract_path_text(obj, '{f4}') = '{"f5":99,"f6":"foo"}';
json_type | obj
-----------+-------------------------------------------
aa | {"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}
(1 row)
JSONB index example
-- create test tables and generate data
CREATE TABLE jtest1 (
id int,
jdoc json
);
CREATE OR REPLACE FUNCTION random_string(INTEGER)
RETURNS TEXT AS
$BODY$
SELECT array_to_string(
ARRAY (
SELECT substring(
'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
FROM (ceil(random()*62))::int FOR 1
)
FROM generate_series(1, $1)
),
''
)
$BODY$
LANGUAGE sql VOLATILE;
insert into jtest1 select t.seq, ('{"a":{"a1":"a1a1", "a2":"a2a2"},
"name":"'||random_string(10)||'","b":"bbbbb"}')::json from
generate_series(1, 10000000) as t(seq);
CREATE TABLE jtest2 (
id int,
jdoc jsonb
);
CREATE TABLE jtest3 (
id int,
jdoc jsonb
);
insert into jtest2 select id, jdoc::jsonb from jtest1;
insert into jtest3 select id, jdoc::jsonb from jtest1;
-- create the indexes
CREATE INDEX idx_jtest2 ON jtest2 USING gin(jdoc);
CREATE INDEX idx_jtest3 ON jtest3 USING gin(jdoc jsonb_path_ops);
-- query without an index (run before the indexes above are created)
EXPLAIN ANALYZE SELECT * FROM jtest2 where jdoc @> '{"name":"N9WP5txmVu"}';
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
Gather Motion 2:1 (slice1; segments: 2) (cost=0.00..162065.73 rows=10100 width=88) (actual time=1343.248..1777.605 rows=1 loops=1)
-> Seq Scan on jtest2 (cost=0.00..162065.73 rows=5050 width=88) (actual time=0.042..1342.426 rows=1 loops=1)
Filter: (jdoc @> '{"name": "N9WP5txmVu"}'::jsonb)
Planning time: 0.172 ms
(slice0) Executor memory: 59K bytes.
(slice1) Executor memory: 91K bytes avg x 2 workers, 91K bytes max (seg0).
Memory used: 2047000kB
Optimizer: Postgres query optimizer
Execution time: 1778.234 ms
(9 rows)
-- query using the index created with the default jsonb_ops operator class
EXPLAIN ANALYZE SELECT * FROM jtest2 where jdoc @> '{"name":"N9WP5txmVu"}';
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
Gather Motion 2:1 (slice1; segments: 2) (cost=88.27..13517.81 rows=10100 width=88) (actual time=0.655..0.659 rows=1 loops=1)
-> Bitmap Heap Scan on jtest2 (cost=88.27..13517.81 rows=5050 width=88) (actual time=0.171..0.172 rows=1 loops=1)
Recheck Cond: (jdoc @> '{"name": "N9WP5txmVu"}'::jsonb)
-> Bitmap Index Scan on idx_jtest2 (cost=0.00..85.75 rows=5050 width=0) (actual time=0.217..0.217 rows=1 loops=1)
Index Cond: (jdoc @> '{"name": "N9WP5txmVu"}'::jsonb)
Planning time: 0.151 ms
(slice0) Executor memory: 69K bytes.
(slice1) Executor memory: 628K bytes avg x 2 workers, 632K bytes max (seg1). Work_mem: 9K bytes max.
Memory used: 2047000kB
Optimizer: Postgres query optimizer
Execution time: 1.266 ms
(11 rows)
-- query using the index created with the jsonb_path_ops operator class
EXPLAIN ANALYZE SELECT * FROM jtest3 where jdoc @> '{"name":"N9WP5txmVu"}';
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
Gather Motion 2:1 (slice1; segments: 2) (cost=84.28..13513.81 rows=10101 width=88) (actual time=0.710..0.711 rows=1 loops=1)
-> Bitmap Heap Scan on jtest3 (cost=84.28..13513.81 rows=5051 width=88) (actual time=0.179..0.181 rows=1 loops=1)
Recheck Cond: (jdoc @> '{"name": "N9WP5txmVu"}'::jsonb)
-> Bitmap Index Scan on idx_jtest3 (cost=0.00..81.75 rows=5051 width=0) (actual time=0.106..0.106 rows=1 loops=1)
Index Cond: (jdoc @> '{"name": "N9WP5txmVu"}'::jsonb)
Planning time: 0.144 ms
(slice0) Executor memory: 69K bytes.
(slice1) Executor memory: 305K bytes avg x 2 workers, 309K bytes max (seg1). Work_mem: 9K bytes max.
Memory used: 2047000kB
Optimizer: Postgres query optimizer
Execution time: 1.291 ms
(11 rows)
Here is an example of accessing the data from Python:
#!/usr/bin/env python3
import time
import json
import psycopg2

def gpquery(sql):
    conn = None
    try:
        conn = psycopg2.connect("dbname=sanity1x2")
        conn.autocommit = True
        cur = conn.cursor()
        cur.execute(sql)
        rows = cur.fetchall()
        cur.close()
        conn.close()
        return rows
    except Exception as e:
        if conn:
            try:
                conn.close()
            except Exception:
                pass
        time.sleep(10)
        print(e)
        return None

def main():
    sql = "select obj from tj;"
    #rows = Connection(host, port, user, pwd, dbname).query(sql)
    rows = gpquery(sql)
    for row in rows:
        print(json.loads(row[0]))

if __name__ == "__main__":
    main()
Encode - character encodings in Perl
use Encode qw(decode encode);
$characters = decode('UTF-8', $octets, Encode::FB_CROAK);
$octets = encode('UTF-8', $characters, Encode::FB_CROAK);
Encode consists of a collection of modules whose details are too extensive to fit in one document. This one itself explains the top-level APIs and general topics at a glance. For other topics and more details, see the documentation for these modules:
The Encode module provides the interface between Perl strings and the rest of the system. Perl strings are sequences of characters.
The repertoire of characters that Perl can represent is a superset of those defined by the Unicode Consortium. On most platforms the ordinal value of a character, as returned by ord(), is the Unicode codepoint for that character. The exceptions are platforms where the legacy encoding is some variant of EBCDIC rather than a superset of ASCII; see perlebcdic.
During recent history, data is moved around a computer in 8-bit chunks, often called "bytes" but also known as "octets" in standards documents. Perl is widely used to manipulate data of many types: not only strings of characters representing human or computer languages, but also "binary" data, being the machine's representation of numbers, pixels in an image, or just about anything.
When Perl is processing "binary data", the programmer wants Perl to process "sequences of bytes". This is not a problem for Perl: because a byte has 256 possible values, it easily fits in Perl's much larger "logical character".
character: a character in the range 0 .. 2**32-1 (or more); what Perl's strings are made of.
byte: a character in the range 0..255; a special case of a Perl character.
octet: 8 bits of data, with ordinal values 0..255; the term for bytes passed to or from a non-Perl context, such as a disk file, standard I/O stream, database, command-line argument, environment variable, socket, etc.
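Python draws the same character/octet line with two distinct types, which can make the terminology concrete (this cross-language analogy is ours, not part of the Encode docs):

```python
# str is a sequence of characters (code points); bytes is a sequence of octets.
s = "caf\u00e9"             # 4 characters; the last is U+00E9
octets = s.encode("utf-8")  # 5 octets: the e-acute encodes as two bytes
print(len(s), len(octets))
```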
$octets = encode(ENCODING, STRING[, CHECK])
Encodes the scalar value STRING from Perl's internal form into ENCODING and returns a sequence of octets. ENCODING can be either a canonical name or an alias. For encoding names and aliases, see "Defining Aliases". For CHECK, see "Handling Malformed Data".
CAVEAT: the input scalar STRING might be modified in-place depending on what is set in CHECK. See "LEAVE_SRC" if you want your inputs to be left unchanged.
For example, to convert a string from Perl's internal format into ISO-8859-1, also known as Latin1:
$octets = encode("iso-8859-1", $string);
CAVEAT: When you run $octets = encode("UTF-8", $string), then $octets might not be equal to $string. Though both contain the same data, the UTF8 flag for $octets is always off. When you encode anything, the UTF8 flag on the result is always off, even when it contains a completely valid UTF-8 string. See "The UTF8 flag" below.
If the $string is undef, then undef is returned.
str2bytes may be used as an alias for encode.
$string = decode(ENCODING, OCTETS[, CHECK])
This function returns the string that results from decoding the scalar value OCTETS, assumed to be a sequence of octets in ENCODING, into Perl's internal form. As with encode(), ENCODING can be either a canonical name or an alias. For encoding names and aliases, see "Defining Aliases"; for CHECK, see "Handling Malformed Data".
CAVEAT: the input scalar OCTETS might be modified in-place depending on what is set in CHECK. See "LEAVE_SRC" if you want your inputs to be left unchanged.
For example, to convert ISO-8859-1 data into a string in Perl's internal format:
$string = decode("iso-8859-1", $octets);
CAVEAT: When you run $string = decode("UTF-8", $octets), then $string might not be equal to $octets. Though both contain the same data, the UTF8 flag for $string is on. See "The UTF8 flag" below.
If $octets is undef, then undef is returned.
bytes2str may be used as an alias for decode.
[$obj =] find_encoding(ENCODING)
Returns the encoding object corresponding to ENCODING. Returns undef if no matching ENCODING is found. The returned object is what does the actual encoding or decoding.
$string = decode($name, $bytes);
is in fact
$string = do {
$obj = find_encoding($name);
croak qq(encoding "$name" not found) unless ref $obj;
$obj->decode($bytes);
};
with more error checking.
You can therefore save time by reusing this object as follows;
my $enc = find_encoding("iso-8859-1");
while(<>) {
my $string = $enc->decode($_);
... # now do something with $string;
}
find_encoding("latin1")->name; # iso-8859-1
See Encode::Encoding for details.
[$obj =] find_mime_encoding(MIME_ENCODING)
Returns the encoding object corresponding to MIME_ENCODING. Acts like find_encoding(), but the mime_name() of the returned object must match MIME_ENCODING. So, unlike find_encoding(), canonical names and aliases are not used when searching for the object.
find_mime_encoding("utf8");         # returns undef: "utf8" is not a valid MIME_ENCODING
find_mime_encoding("utf-8");        # returns the encoding object "utf-8-strict"
find_mime_encoding("UTF-8");        # same as "utf-8": MIME_ENCODING is case insensitive
find_mime_encoding("utf-8-strict"); # returns undef: "utf-8-strict" is not a valid MIME_ENCODING
[$length =] from_to($octets, FROM_ENC, TO_ENC [, CHECK])
Converts in-place data between two encodings. The data in $octets must be encoded as octets and not as characters in Perl's internal format. For example, to convert ISO-8859-1 data into Microsoft's CP1250 encoding:
from_to($octets, "iso-8859-1", "cp1250");
and to convert it back:
from_to($octets, "cp1250", "iso-8859-1");
Because the conversion happens in place, the data to be converted cannot be a string constant: it must be a scalar variable.
from_to() returns the length of the converted string in octets on success, and undef on error.
CAVEAT: The following operations may look the same, but are not:
from_to($data, "iso-8859-1", "UTF-8"); #1
$data = decode("iso-8859-1", $data); #2
Both #1 and #2 make $data consist of a completely valid UTF-8 string, but only #2 turns the UTF8 flag on. #1 is equivalent to:
$data = encode("UTF-8", decode("iso-8859-1", $data));
See "The UTF8 flag" below.
Also note that:
from_to($octets, $from, $to, $check);
is equivalent to:
$octets = encode($to, decode($from, $octets), $check);
Yes, it does not respect the $check during decoding. It is deliberately done that way. If you need minute control, use decode followed by encode as follows:
$octets = encode($to, decode($from, $octets, $check_from), $check_to);
$octets = encode_utf8($string);
Equivalent to $octets = encode("utf8", $string). The characters in $string are encoded in Perl's internal format, and the result is returned as a sequence of octets. Because all possible characters in Perl have a (loose, not strict) utf8 representation, this function cannot fail.
WARNING: do not use this function for data exchange as it can produce not strict utf8 $octets! For strictly valid UTF-8 output use $octets = encode("UTF-8", $string).
$string = decode_utf8($octets [, CHECK]);
Equivalent to $string = decode("utf8", $octets [, CHECK]). The sequence of octets represented by $octets is decoded from (loose, not strict) utf8 into a sequence of logical characters. Because not all sequences of octets are valid not strict utf8, it is quite possible for this function to fail. For CHECK, see "Handling Malformed Data".
WARNING: do not use this function for data exchange as it can produce $string with not strict utf8 representation! For strictly valid UTF-8 $string representation use $string = decode("UTF-8", $octets [, CHECK]).
CAVEAT: the input $octets might be modified in-place depending on what is set in CHECK. See "LEAVE_SRC" if you want your inputs to be left unchanged.
use Encode;
@list = Encode->encodings();
Returns a list of canonical names of available encodings that have already been loaded. To get a list of all available encodings including those that have not yet been loaded, say:
@all_encodings = Encode->encodings(":all");
Or you can give the name of a specific module:
@with_jp = Encode->encodings("Encode::JP");
When "::" is not in the name, "Encode::" is assumed.
@ebcdic = Encode->encodings("EBCDIC");
To find out in detail which encodings are supported by this package, see Encode::Supported.
To add a new alias to a given encoding, use:
use Encode;
use Encode::Alias;
define_alias(NEWNAME => ENCODING);
After that, NEWNAME can be used as an alias for ENCODING. ENCODING may be either the name of an encoding or an encoding object.
Before you do that, first make sure the alias is nonexistent using resolve_alias(), which returns the canonical name thereof. For example:
Encode::resolve_alias("latin1") eq "iso-8859-1" # true
Encode::resolve_alias("iso-8859-12") # false; nonexistent
Encode::resolve_alias($name) eq $name # true if $name is canonical
resolve_alias() does not need use Encode::Alias; it can be imported via use Encode qw(resolve_alias).
See Encode::Alias for details.
The canonical name of a given encoding does not necessarily agree with the IANA Character Set Registry, commonly seen as Content-Type: text/plain; charset=. In most cases the canonical name works, but sometimes it does not, most notably with "utf-8-strict".
As of Encode version 2.21, a new method mime_name() is therefore added.
use Encode;
my $enc = find_encoding("UTF-8");
warn $enc->name; # utf-8-strict
warn $enc->mime_name; # UTF-8
See also: Encode::Encoding
If your perl supports PerlIO (which is the default), you can use a PerlIO layer to decode and encode directly via a filehandle. The following two examples are fully identical in functionality:
### Version 1 via PerlIO
open(INPUT, "< :encoding(shiftjis)", $infile)
|| die "Can't open < $infile for reading: $!";
open(OUTPUT, "> :encoding(euc-jp)", $outfile)
|| die "Can't open > $outfile for writing: $!";
while (<INPUT>) { # auto decodes $_
print OUTPUT; # auto encodes $_
}
close(INPUT) || die "can't close $infile: $!";
close(OUTPUT) || die "can't close $outfile: $!";
### Version 2 via from_to()
open(INPUT, "< :raw", $infile)
|| die "Can't open < $infile for reading: $!";
open(OUTPUT, "> :raw", $outfile)
|| die "Can't open > $outfile for writing: $!";
while (<INPUT>) {
from_to($_, "shiftjis", "euc-jp", 1); # switch encoding
print OUTPUT; # emit raw (but properly encoded) data
}
close(INPUT) || die "can't close $infile: $!";
close(OUTPUT) || die "can't close $outfile: $!";
In the first version above, you let the appropriate encoding layer handle the conversion. In the second, you explicitly translate from one encoding to the other.
Unfortunately, it may be that encodings are not PerlIO-savvy. You can check to see whether your encoding is supported by PerlIO by invoking the perlio_ok method on it:
Encode::perlio_ok("hz"); # false
find_encoding("euc-cn")->perlio_ok; # true wherever PerlIO is available
use Encode qw(perlio_ok); # imported upon request
perlio_ok("euc-jp")
The optional CHECK argument tells Encode what to do when encountering malformed data. Without CHECK, Encode::FB_DEFAULT (== 0) is assumed.
As of version 2.12, Encode supports coderef values for CHECK; see below.
NOTE: Not all encodings support this feature. Some encodings ignore the CHECK argument. For example, Encode::Unicode ignores CHECK and it always croaks on error.
CHECK = Encode::FB_DEFAULT ( == 0)
If CHECK is 0, encoding and decoding replace any malformed character with a substitution character. When you encode, the encoding's substitution character (SUBCHAR) is used. When you decode, the Unicode REPLACEMENT CHARACTER, code point U+FFFD, is used. If the data is supposed to be UTF-8, an optional lexical warning in the "utf8" warning category is given.
CHECK = Encode::FB_CROAK ( == 1)
If CHECK is 1, methods immediately die with an error message. Therefore, when CHECK is 1, you should trap exceptions with eval{}, unless you really want to let it die.
CHECK = Encode::FB_QUIET
If CHECK is set to Encode::FB_QUIET, encoding and decoding immediately return the portion of the data that has been processed so far when an error occurs. The data argument is overwritten with everything after that point; that is, the unprocessed portion of the data. This is handy when you have to call decode repeatedly in the case where your source data may contain partial multi-byte character sequences, (that is, you are reading with a fixed-width buffer). Here's some sample code to do exactly that:
my($buffer, $string) = ("", "");
while (read($fh, $buffer, 256, length($buffer))) {
$string .= decode($encoding, $buffer, Encode::FB_QUIET);
# $buffer now contains the unprocessed partial character
}
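The same partial-buffer situation exists outside Perl; as a point of comparison (our analogy, not from the Encode docs), Python handles it with an incremental decoder that buffers a trailing partial multi-byte sequence between calls:

```python
import codecs

data = "h\u00e9llo".encode("utf-8")       # the e-acute occupies two octets
dec = codecs.getincrementaldecoder("utf-8")()
part1 = dec.decode(data[:2])              # ends mid-character: yields only 'h'
part2 = dec.decode(data[2:], final=True)  # buffered byte completes the character
print(part1 + part2)
```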
CHECK = Encode::FB_WARN
This is the same as FB_QUIET above, except that instead of being silent on errors, it issues a warning. This is handy for when you are debugging.
For encodings that are implemented by the Encode::XS module, CHECK == Encode::FB_PERLQQ puts encode and decode into perlqq fallback mode.
When you decode, \xHH is inserted for a malformed character, where HH is the hex representation of the octet that could not be decoded to utf8. When you encode, \x{HHHH} will be inserted, where HHHH is the Unicode code point (in any number of hex digits) of the character that cannot be found in the character repertoire of the encoding.
The HTML/XML character reference modes are about the same. In place of \x{HHHH}, HTML uses &#NNN; where NNN is a decimal number, and XML uses &#xHHHH; where HHHH is the hexadecimal number.
In Encode 2.10 or later, LEAVE_SRC is also implied.
These modes are all actually set via a bitmask. Here is how the FB_XXX constants are laid out. You can import the FB_XXX constants via use Encode qw(:fallbacks), and you can import the generic bitmask constants via use Encode qw(:fallback_all).
               FB_DEFAULT  FB_CROAK  FB_QUIET  FB_WARN  FB_PERLQQ
DIE_ON_ERR     0x0001         X
WARN_ON_ERR    0x0002                             X
RETURN_ON_ERR  0x0004                   X         X
LEAVE_SRC      0x0008                                       X
PERLQQ         0x0100                                       X
HTMLCREF       0x0200
XMLCREF        0x0400
Encode::LEAVE_SRC
If the Encode::LEAVE_SRC bit is not set but CHECK is set, then the source string to encode() or decode() will be overwritten in place. If you're not interested in this, then bitwise-OR it with the bitmask.
As of Encode 2.12, CHECK can also be a code reference which takes the ordinal value of the unmapped character as an argument and returns octets that represent the fallback character. For instance:
$ascii = encode("ascii", $utf8, sub{ sprintf "<U+%04X>", shift });
Acts like FB_PERLQQ but U+XXXX is used instead of \x{XXXX}.
Fallback for decode must return decoded string (sequence of characters) and takes a list of ordinal values as its arguments. So for example if you wish to decode octets as UTF-8, and use ISO-8859-15 as a fallback for bytes that are not valid UTF-8, you could write
$str = decode 'UTF-8', $octets, sub {
my $tmp = join '', map chr, @_;
return decode 'ISO-8859-15', $tmp;
};
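Python offers a comparable hook via codecs.register_error; as a sketch (the handler name latin9_fallback is ours), this decodes bytes that are invalid UTF-8 as ISO-8859-15 instead of failing:

```python
import codecs

def latin9_fallback(err):
    # err is a UnicodeDecodeError; return the replacement string and
    # the position at which decoding should resume.
    bad = err.object[err.start:err.end]
    return bad.decode("iso-8859-15"), err.end

codecs.register_error("latin9_fallback", latin9_fallback)

octets = b"price: \xa4 5"  # 0xA4 is not valid UTF-8 here
s = octets.decode("utf-8", "latin9_fallback")
print(s)                   # 0xA4 is the euro sign in ISO-8859-15
```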
To define a new encoding, use:
use Encode qw(define_encoding);
define_encoding($object, CANONICAL_NAME [, alias...]);
CANONICAL_NAME will be associated with $object. The object should provide the interface described in Encode::Encoding. If more than two arguments are provided, additional arguments are considered aliases for $object.
See Encode::Encoding for details.
Before the introduction of Unicode support in Perl, the eq operator just compared the strings represented by two scalars. Beginning with Perl 5.8, eq compares two strings with simultaneous consideration of the UTF8 flag. To explain why we made it so, I quote from page 402 of Programming Perl, 3rd ed.
Old byte-oriented programs should not spontaneously break on the old byte-oriented data they used to work on.
Old byte-oriented programs should magically start working on the new character-oriented data when appropriate.
Programs should run just as fast in the new character-oriented mode as in the old byte-oriented mode.
Perl should remain one language, rather than forking into a byte-oriented Perl and a character-oriented Perl.
When Programming Perl, 3rd ed. was written, not even Perl 5.6.0 had been born yet, and many features documented in the book remained unimplemented for a long time. Perl 5.8 corrected much of this, and the introduction of the UTF8 flag is one of them. You can think of there being two fundamentally different kinds of strings and string operations in Perl: a byte-oriented mode for when the internal UTF8 flag is off, and a character-oriented mode for when the internal UTF8 flag is on.
This UTF8 flag is not visible in Perl scripts, exactly for the same reason you cannot (or rather, you don't have to) see whether a scalar contains a string, an integer, or a floating-point number. But you can still peek and poke these if you will. See the next section.
The following API uses parts of Perl's internals in the current implementation. As such, they are efficient but may change in a future release.
is_utf8(STRING [, CHECK])
[INTERNAL] Tests whether the UTF8 flag is turned on in the STRING. If CHECK is true, also checks whether STRING contains well-formed UTF-8. Returns true if successful, false otherwise.
Typically only necessary for debugging and testing. Don't use this flag as a marker to distinguish character and binary data, that should be decided for each variable when you write your code.
CAVEAT: If STRING has UTF8 flag set, it does NOT mean that STRING is UTF-8 encoded and vice-versa.
As of Perl 5.8.1, utf8 also has the utf8::is_utf8 function.
_utf8_on(STRING)
[INTERNAL] Turns the STRING's internal UTF8 flag on. The STRING is not checked for containing only well-formed UTF-8. Do not use this unless you know with absolute certainty that the STRING holds only well-formed UTF-8. Returns the previous state of the UTF8 flag (so please don't treat the return value as indicating success or failure), or undef if STRING is not a string.
NOTE: For security reasons, this function does not work on tainted values.
_utf8_off(STRING)
[INTERNAL] Turns the STRING's internal UTF8 flag off. Do not use frivolously. Returns the previous state of the UTF8 flag, or undef if STRING is not a string. Do not treat the return value as indicative of success or failure, because that isn't what it means: it is only the previous setting.
NOTE: For security reasons, this function does not work on tainted values.
....We now view strings not as sequences of bytes, but as sequences
of numbers in the range 0 .. 2**32-1 (or in the case of 64-bit
computers, 0 .. 2**64-1) -- Programming Perl, 3rd ed.
That has historically been Perl's notion of UTF-8, as that is how UTF-8 was first conceived by Ken Thompson when he invented it. However, thanks to later revisions to the applicable standards, official UTF-8 is now rather stricter than that. For example, its range is much narrower (0 .. 0x10_FFFF to cover only 21 bits instead of 32 or 64 bits) and some sequences are not allowed, like those used in surrogate pairs, the 31 non-character code points 0xFDD0 .. 0xFDEF, the last two code points in any plane (0xXX_FFFE and 0xXX_FFFF), all non-shortest encodings, etc.
The former default in which Perl would always use a loose interpretation of UTF-8 has now been overruled:
From: Larry Wall <larry@wall.org>
Date: December 04, 2004 11:51:58 JST
To: perl-unicode@perl.org
Subject: Re: Make Encode.pm support the real UTF-8
Message-Id: <20041204025158.GA28754@wall.org>
On Fri, Dec 03, 2004 at 10:12:12PM +0000, Tim Bunce wrote:
: I've no problem with 'utf8' being perl's unrestricted utf8 encoding,
: but "UTF-8" is the name of the standard and should give the
: corresponding behaviour.
For what it's worth, that's how I've always kept them straight in my
head.
Also for what it's worth, Perl 6 will mostly default to strict but
make it easy to switch back to lax.
Larry
Got that? As of Perl 5.8.7, "UTF-8" means UTF-8 in its current sense, which is conservative and strict and security-conscious, whereas "utf8" means UTF-8 in its former sense, which was liberal and loose and lax. Encode version 2.10 or later thus groks this subtle but critically important distinction between "UTF-8" and "utf8".
encode("utf8", "\x{FFFF_FFFF}", 1); # okay
encode("UTF-8", "\x{FFFF_FFFF}", 1); # croaks
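A rough Python parallel of this strict/lax split (our analogy): Python's "utf-8" codec is strict like Encode's "UTF-8" and rejects lone surrogates, while the "surrogatepass" error handler opts into lax behaviour, loosely like Encode's "utf8":

```python
# Strict mode croaks on a lone surrogate, just as encode("UTF-8", ...) does.
try:
    "\ud800".encode("utf-8")
    strict_ok = True
except UnicodeEncodeError:
    strict_ok = False

# The lax escape hatch emits the ill-formed three-byte sequence anyway.
loose = "\ud800".encode("utf-8", "surrogatepass")
print(strict_ok, loose)
```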
In the Encode module, "UTF-8" is actually a canonical name for "utf-8-strict". That hyphen between the "UTF" and the "8" is critical; without it, Encode goes "liberal" and (perhaps overly-)permissive:
find_encoding("UTF-8")->name # is 'utf-8-strict'
find_encoding("utf-8")->name # ditto. names are case insensitive
find_encoding("utf_8")->name # ditto. "_" are treated as "-"
find_encoding("UTF8")->name # is 'utf8'.
Perl's internal UTF8 flag is called "UTF8", without a hyphen. It indicates whether a string is internally encoded as "utf8", also without a hyphen.
Encode::Encoding, Encode::Supported, Encode::PerlIO, encoding, perlebcdic, "open" in perlfunc, perlunicode, perluniintro, perlunifaq, perlunitut, utf8, the Perl Unicode Mailing List http://lists.perl.org/list/perl-unicode.html
This project was originated by the late Nick Ing-Simmons and later maintained by Dan Kogai <dankogai@cpan.org>. See AUTHORS for a full list of people involved. For any questions, send mail to <perl-unicode@perl.org> so that we can all share.
While Dan Kogai retains the copyright as a maintainer, credit should go to all those involved. See AUTHORS for a list of those who submitted code to the project.
Copyright 2002-2014 Dan Kogai <dankogai@cpan.org>.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. |
To drive a stepper motor I went through quite a few code samples; some were too sophisticated to follow, so I took a relatively simple one (here) and adapted it. Tested working: it drives a 28BYJ-48 through a ULN2003APG driver board.
import time
from machine import Pin
speed = 2
STEPER_ROUND=512 # step cycles for one full revolution (360 degrees)
ANGLE_PER_ROUND=STEPER_ROUND/360 # step cycles per degree of rotation
#print('ANGLE_PER_ROUND:',ANGLE_PER_ROUND)
p1 = Pin(16, Pin.OUT, value=0)
p2 = Pin(14, Pin.OUT, value=0)
p3 = Pin(12, Pin.OUT, value=0)
p4 = Pin(13, Pin.OUT, value=0)
def Front():
global speed
p1.value(1)
p2.value(1)
p3.value(0)
p4.value(0)
time.sleep_ms(speed)
p1.value(0)
p2.value(1)
p3.value(1)
p4.value(0)
time.sleep_ms(speed)
p1.value(0)
p2.value(0)
p3.value(1)
p4.value(1)
time.sleep_ms(speed)
p1.value(1)
p2.value(0)
p3.value(0)
p4.value(1)
time.sleep_ms(speed)
def Back():
global speed
p1.value(1)
p2.value(1)
p3.value(0)
p4.value(0)
time.sleep_ms(speed)
p1.value(1)
p2.value(0)
p3.value(0)
p4.value(1)
time.sleep_ms(speed)
p1.value(0)
p2.value(0)
p3.value(1)
p4.value(1)
time.sleep_ms(speed)
p1.value(0)
p2.value(1)
p3.value(1)
p4.value(0)
time.sleep_ms(speed)
def Stop():
p1.value(0)
p2.value(0)
p3.value(0)
p4.value(0)
def Run(angle):
global ANGLE_PER_ROUND
val=int(ANGLE_PER_ROUND*abs(angle)) # int(): ANGLE_PER_ROUND is a float and range() rejects floats
if(angle>0):
for i in range(0,val):
Front()
else:
for i in range(0,val):
Back()
angle = 0
Stop()
def main():
    Run(180)  # the original called SteperRun(), which is undefined; the function is Run()
    Run(-180)
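The eight explicit pin writes in Front()/Back() can also be expressed as a four-entry full-step table, with reverse motion simply walking the table backwards. A hardware-free sketch of that idea (the helper below is ours; on the board you would still drive machine.Pin objects as above):

```python
# Full-step sequence for the 28BYJ-48: two adjacent coils energized at a time.
SEQUENCE = [
    (1, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 1),
    (1, 0, 0, 1),
]

def coil_states(angle, steps_per_rev=512):
    """Yield (p1,p2,p3,p4) states to rotate `angle` degrees; negative reverses."""
    nsteps = int(steps_per_rev * abs(angle) / 360)
    order = SEQUENCE if angle > 0 else SEQUENCE[::-1]
    for i in range(nsteps):
        yield order[i % len(order)]

# Rotating by 4 steps' worth of angle walks the table exactly once.
one_pass = list(coil_states(4 * 360 / 512))
print(one_pass)
```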
|
bert-base-en-fr-it-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
How to use
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-it-cased")
To generate other smaller versions of multilingual transformers please visit our Github repo.
How to cite
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
1. Why modify it?
2. Modification steps:
2.1 Add the file vnpy\usertools\trade_hour.py
with the following contents:
"""
This module implements the trading-hour segments of a futures contract.
Author: hxxjava
Date: 2020-8-1
"""
from typing import Callable,List,Dict, Tuple, Union
from enum import Enum
import datetime
import pytz
CHINA_TZ = pytz.timezone("Asia/Shanghai")
from vnpy.trader.utility import extract_vt_symbol
from vnpy.trader.constant import Interval
from rqdatac.utils import to_date
import rqdatac as rq
def get_listed_date(symbol:str):
'''
Return the contract's listing date.
'''
info = rq.instruments(symbol)
return to_date(info.listed_date)
def get_de_listed_date(symbol:str):
'''
Return the contract's delisting (delivery) date.
'''
info = rq.instruments(symbol)
return to_date(info.de_listed_date)
class Timeunit(Enum):
"""
Time unit
"""
SECOND = '1s'
MINUTE = '1m'
HOUR = '1h'
class TradeHours(object):
""" Trading-hour segments of a contract """
def __init__(self,symbol:str):
self.symbol = symbol.upper()
self.init()
def init(self):
"""
Initialize the trading-day dictionaries and the trading-time-segment data.
"""
self.listed_date = get_listed_date(self.symbol)
self.de_listed_date = get_de_listed_date(self.symbol)
self.trade_date_index = {} # trading date -> day index of the contract
self.trade_index_date = {} # day index -> trading date
trade_dates = rq.get_trading_dates(self.listed_date,self.de_listed_date) # all trading dates of the contract
days = 0
for td in trade_dates:
self.trade_date_index[td] = days
self.trade_index_date[days] = td
days += 1
trading_hours = rq.get_trading_hours(self.symbol,date=self.listed_date,frequency='tick',expected_fmt='datetime')
self.time_dn_pairs = self._get_trading_times_dn(trading_hours)
trading_hours0 = [(CHINA_TZ.localize(start),CHINA_TZ.localize(stop)) for start,stop in trading_hours]
self.trade_date_index[self.listed_date] = (0,trading_hours0)
for day in range(1,days):
td = self.trade_index_date[day]
trade_datetimes = []
for (start,dn1),(stop,dn2) in self.time_dn_pairs:
#start: session start time, dn1: day offset of the start relative to the trading day
#stop : session stop time, dn2: days to push the stop past the start (1 if the session crosses midnight)
d = self.trade_index_date[day+dn1]
start_dt = CHINA_TZ.localize(datetime.datetime.combine(d,start))
stop_dt = CHINA_TZ.localize(datetime.datetime.combine(d,stop))
trade_datetimes.append((start_dt,stop_dt+datetime.timedelta(days=dn2)))
self.trade_date_index[td] = (day,trade_datetimes)
def _get_trading_times_dn(self,trading_hours:List[Tuple[datetime.datetime,datetime.datetime]]):
"""
Handle sessions that cross a calendar day; not intended for external use.
Result: [((start1,dn11),(stop1,dn21)),((start2,dn12),(stop2,dn22)),...,((startN,dn1N),(stopN,dn2N))]
where:
startN: session start time, dn1N: day offset of the start relative to the trading day,
stopN: session stop time, dn2N: days to push the stop past the start (1 if the session crosses midnight)
"""
ilen = len(trading_hours)
if ilen == 0:
return []
start_stops = []
for start,stop in trading_hours:
start_stops.insert(0,(start.time(),stop.time()))
pre_start,pre_stop = start_stops[0]
dn1 = 0
dn2 = 1 if pre_start > pre_stop else 0
time_dn_pairs = [((pre_start,dn1),(pre_stop,dn2))]
for start,stop in start_stops[1:]:
if start > pre_start:
dn1 -= 1
dn2 = 1 if start > stop else 0
time_dn_pairs.insert(0,((start,dn1),(stop,dn2)))
pre_start,pre_stop = start,stop
return time_dn_pairs
def get_date_tradetimes(self,date:datetime.date):
"""
Return the trading time segments of the contract on the given date.
"""
idx,trade_times = self.trade_date_index.get(date,(None,[]))
return idx,trade_times
def get_trade_datetimes(self,dt:datetime,allday:bool=False):
"""
Return the trading time segments for the trading day containing dt.
"""
# earliest trading time of the contract
idx0,trade_times0 = self.get_date_tradetimes(self.listed_date)
start0,stop0 = trade_times0[0]
if dt < start0:
return None,[]
# first, find how many trading days have elapsed since listing at dt's date
date,dn = dt.date(),0
days = None
while date < self.de_listed_date:
days,ths = self.trade_date_index.get(date,(None,[]))
if days is None: # was 'if not days', which wrongly treated day 0 (the listing date) as missing
dn += 1
date = (dt+datetime.timedelta(days=dn)).date()
else:
break
# if nothing is found even past the delivery date, dt is not a valid trading time
if days is None:
return (None,[])
index_3 = [days,days+1,days-1] # the trading day and its two neighbors
date_3d = []
for day in index_3:
date = self.trade_index_date.get(day,None)
date_3d.append(date)
# print(date_3d)
for date in date_3d:
if not date:
# print(f"{date} is not trade date")
continue
idx,trade_dts = self.get_date_tradetimes(date)
# print(f"{date} tradetimes {trade_dts}")
ilen = len(trade_dts)
if ilen > 0:
start0,stop = trade_dts[0] # start0 is the start time of trading day 'date'
start,stop0 = trade_dts[-1]
if dt<start0 or dt>stop0:
continue
for start,stop in trade_dts:
if dt>=start and dt < stop:
if allday:
return idx,trade_dts
else:
return idx,[(start,stop)]
return None,[]
def get_trade_time_perday(self):
"""
Compute the total trading time per day (in minutes).
"""
TTPD = datetime.timedelta(0,0,0)
datetimes = []
today = datetime.datetime.now().date()
for (start,dn1),(stop,dn2) in self.time_dn_pairs:
start_dt = CHINA_TZ.localize(datetime.datetime.combine(today,start)) + datetime.timedelta(days=dn1)
stop_dt = CHINA_TZ.localize(datetime.datetime.combine(today,stop)) + datetime.timedelta(days=dn2)
time_delta = stop_dt - start_dt
TTPD = TTPD + time_delta
return int(TTPD.seconds/60)
def get_trade_time_inday(self,dt:datetime,unit:Timeunit=Timeunit.MINUTE):
"""
Compute the trading time elapsed within the trading day at dt.
unit: '1s': seconds; '1m': minutes; '1h': hours
"""
TTID = datetime.timedelta(0,0,0)
day,trade_times = self.get_trade_datetimes(dt,allday=True)
if not trade_times:
return None
for start,stop in trade_times:
if dt > stop:
time_delta = stop - start
TTID += time_delta
elif dt > start:
time_delta = dt - start
TTID += time_delta
break
else:
break
if unit == Timeunit.SECOND:
return TTID.seconds
elif unit == Timeunit.MINUTE:
return int(TTID.seconds/60)
elif unit == Timeunit.HOUR:
return int(TTID.seconds/3600)
else:
return TTID
def get_day_tradetimes(self,dt:datetime):
"""
Return the day-session trading time segments of the contract.
"""
index,trade_times = self.get_trade_datetimes(dt,allday=True)
trade_times1 = []
if trade_times:
for start_dt,stop_dt in trade_times:
if start_dt.time() < datetime.time(18,0,0):
trade_times1.append((start_dt,stop_dt))
return index,trade_times1
return (index,trade_times1)
def get_night_tradetimes(self,dt:datetime):
"""
Return the night-session trading time segments of the contract.
"""
index,trade_times = self.get_trade_datetimes(dt,allday=True)
trade_times1 = []
if trade_times:
for start_dt,stop_dt in trade_times:
if start_dt.time() > datetime.time(18,0,0):
trade_times1.append((start_dt,stop_dt))
return index,trade_times1
return (index,trade_times1)
def convet_to_datetime(self,day:int,minutes:int):
"""
Convert 'minutes' within trading day number 'day' into a datetime.
"""
date = self.trade_index_date.get(day,None)
if date is None:
return None
idx,trade_times = self.trade_date_index.get(date,(None,[]))
if not trade_times: # probably unnecessary
return None
for (start,stop) in trade_times:
timedelta = stop - start
if minutes < int(timedelta.seconds/60):
return start + datetime.timedelta(minutes=minutes)
else:
minutes -= int(timedelta.seconds/60)
return None
def get_bar_window(self,dt:datetime,window:int,interval:Interval=Interval.MINUTE):
"""
Compute the start and stop time of the bar containing dt.
"""
bar_windows = (None,None)
day,trade_times = self.get_trade_datetimes(dt,allday=True)
if not trade_times:
# print(f"day={day} trade_times={trade_times}")
return bar_windows
# trading minutes per trading day
TTPD = self.get_trade_time_perday()
# trading minutes elapsed within the trading day at dt
TTID = self.get_trade_time_inday(dt,unit=Timeunit.MINUTE)
# total trading minutes since listing, used to locate dt's bar
total_minites = day*TTPD + TTID
# bar width in minutes
if interval == Interval.MINUTE:
bar_width = window
elif interval == Interval.HOUR:
bar_width = 60*window
elif interval == Interval.DAILY:
bar_width = TTPD*window
elif interval == Interval.WEEKLY:
bar_width = TTPD*window*5
else:
return bar_windows
# start and stop of the bar, expressed in total trading minutes
start_m = int(total_minites/bar_width)*bar_width
stop_m = start_m + bar_width
# convert the bar start to a datetime
start_d = int(start_m / TTPD)
minites = start_m % TTPD
start_dt = self.convet_to_datetime(start_d,minites)
# print(f"start_d={start_d} minites={minites}---->{start_dt}")
# convert the bar stop to a datetime
stop_d = int(stop_m / TTPD)
minites = stop_m % TTPD
stop_dt = self.convet_to_datetime(stop_d,minites)
# print(f"stop_d={stop_d} minites={minites}---->{stop_dt}")
return start_dt,stop_dt
def get_date_start_stop(self,dt:datetime):
"""
Return the start and stop time of the trading day containing dt.
"""
index,trade_times = self.get_trade_datetimes(dt,allday=True)
if trade_times:
valid_dt = False
for t1,t2 in trade_times:
if t1 < dt and dt < t2:
valid_dt = True
break
if valid_dt:
start_dt = trade_times[0][0]
stop_dt = trade_times[-1][1]
return True,(start_dt,stop_dt)
return False,(None,None)
def get_day_start_stop(self,dt:datetime):
"""
Return the start and stop time of the day session of the trading day containing dt.
"""
index,trade_times = self.get_day_tradetimes(dt)
if trade_times:
valid_dt = False
for t1,t2 in trade_times:
if t1 < dt and dt < t2:
valid_dt = True
break
if valid_dt:
start_dt = trade_times[0][0]
stop_dt = trade_times[-1][1]
return True,(start_dt,stop_dt)
return False,(None,None)
def get_night_start_stop(self,dt:datetime):
"""
Return the start and stop time of the night session of the trading day containing dt.
"""
index,trade_times = self.get_night_tradetimes(dt)
if trade_times:
valid_dt = False
for t1,t2 in trade_times:
if t1 < dt and dt < t2:
valid_dt = True
break
if valid_dt:
start_dt = trade_times[0][0]
stop_dt = trade_times[-1][1]
return True,(start_dt,stop_dt)
return False,(None,None)
if __name__ == "__main__":
rq.init('xxxxx','******',("rqdatad-pro.ricequant.com",16011))
# vt_symbols = ["rb2010.SHFE","ag2012.SHFE","i2010.DCE"]
vt_symbols = ["ag2012.SHFE"]
date0 = datetime.date(2020,8,31)
dt0 = CHINA_TZ.localize(datetime.datetime(2020,8,31,9,20,15))
for vt_symbol in vt_symbols:
symbol,exchange = extract_vt_symbol(vt_symbol)
th = TradeHours(symbol)
# trade_hours = th.get_date_tradetimes(date0)
# print(f"\n{vt_symbol} {date0} trade_hours={trade_hours}")
days,trade_hours = th.get_trade_datetimes(dt0,allday=True)
print(f"\n{vt_symbol} {dt0} days:{days} trade_hours={trade_hours}")
if trade_hours:
day_start = trade_hours[0][0]
day_end = trade_hours[-1][1]
print(f"day_start={day_start} day_end={day_end}")
exit_time = day_end + datetime.timedelta(minutes=-5)
print(f"exit_time={exit_time}")
dt1 = CHINA_TZ.localize(datetime.datetime(2020,8,31,9,20,15))
dt2 = CHINA_TZ.localize(datetime.datetime(2020,9,1,1,1,15))
for dt in [dt1,dt2]:
in_trade,(start,stop) = th.get_date_start_stop(dt)
if in_trade:
print(f"\n{vt_symbol} time {dt} trading-day start/stop: {start,stop}")
else:
print(f"\n{vt_symbol} time {dt} is outside trading hours")
in_day,(start,stop) = th.get_day_start_stop(dt)
if in_day:
print(f"\n{vt_symbol} time {dt} day-session start/stop: {start,stop}")
else:
print(f"\n{vt_symbol} time {dt} is outside the day session")
in_night,(start,stop) = th.get_night_start_stop(dt)
if in_night:
print(f"\n{vt_symbol} time {dt} night-session start/stop: {start,stop}")
else:
print(f"\n{vt_symbol} time {dt} is outside the night session")
2.2 Modify the strategy file RBreakerStrategy.py
The code is as follows:
from datetime import datetime,time,timedelta
from vnpy.app.cta_strategy import (
CtaTemplate,
StopOrder,
TickData,
BarData,
TradeData,
OrderData,
BarGenerator,
ArrayManager
)
from vnpy.trader.utility import extract_vt_symbol
from vnpy.usertools.trade_hour import TradeHours
class RBreakStrategy2(CtaTemplate):
""""""
author = "KeKe"
setup_coef = 0.25
break_coef = 0.2
enter_coef_1 = 1.07
enter_coef_2 = 0.07
fixed_size = 1
donchian_window = 30
trailing_long = 0.4
trailing_short = 0.4
multiplier = 3
buy_break = 0 # breakout buy price
sell_setup = 0 # observed sell price
sell_enter = 0 # reversal sell price
buy_enter = 0 # reversal buy price
buy_setup = 0 # observed buy price
sell_break = 0 # breakout sell price
intra_trade_high = 0
intra_trade_low = 0
day_high = 0
day_open = 0
day_close = 0
day_low = 0
tend_high = 0
tend_low = 0
parameters = ["setup_coef", "break_coef", "enter_coef_1", "enter_coef_2", "fixed_size", "donchian_window"]
variables = ["buy_break", "sell_setup", "sell_enter", "buy_enter", "buy_setup", "sell_break"]
def __init__(self, cta_engine, strategy_name, vt_symbol, setting):
""""""
super(RBreakStrategy2, self).__init__(
cta_engine, strategy_name, vt_symbol, setting
)
self.bg = BarGenerator(self.on_bar)
self.am = ArrayManager()
self.bars = []
symbol,exchange = vt_symbol.split('.')
self.trade_hour = TradeHours(symbol)
self.trade_datetimes = None
self.exit_time = None
def on_init(self):
"""
Callback when strategy is inited.
"""
self.write_log("Strategy initializing")
self.load_bar(10)
def on_start(self):
"""
Callback when strategy is started.
"""
self.write_log("Strategy started")
def on_stop(self):
"""
Callback when strategy is stopped.
"""
self.write_log("Strategy stopped")
def on_tick(self, tick: TickData):
"""
Callback of new tick data update.
"""
self.bg.update_tick(tick)
def is_new_day(self,dt:datetime):
"""
Return True if dt falls outside the cached trading day's session,
i.e. a new trading day has started.
"""
if not self.trade_datetimes:
return True
day_start = self.trade_datetimes[0][0]
day_end = self.trade_datetimes[-1][1]
if day_start<=dt and dt < day_end:
return False
return True
def on_bar(self, bar: BarData):
"""
Callback of new bar data update.
"""
self.cancel_all()
am = self.am
am.update_bar(bar)
if not am.inited:
return
# Check whether a new trading day has started
self.new_day = self.is_new_day(bar.datetime)
if self.new_day:
# Compute the trading sessions of the new trading day
days,self.trade_datetimes = self.trade_hour.get_trade_datetimes(bar.datetime,allday=True)
# Compute the exit time
# print(f"trade_datetimes={self.trade_datetimes}")
if self.trade_datetimes:
day_end = self.trade_datetimes[-1][1]
self.exit_time = day_end + timedelta(minutes=-5)
if not self.trade_datetimes:
# Not a valid bar; skip it. Bars can still be pushed here because some
# gateway interfaces behave unpredictably outside trading hours.
return
self.bars.append(bar)
if len(self.bars) <= 2:
return
else:
self.bars.pop(0)
last_bar = self.bars[-2]
# New Day
if self.new_day: # a new trading day has started
if self.day_open:
self.buy_setup = self.day_low - self.setup_coef * (self.day_high - self.day_close) # observed buy price
self.sell_setup = self.day_high + self.setup_coef * (self.day_close - self.day_low) # observed sell price
self.buy_enter = (self.enter_coef_1 / 2) * (self.day_high + self.day_low) - self.enter_coef_2 * self.day_high # reversal buy price
self.sell_enter = (self.enter_coef_1 / 2) * (self.day_high + self.day_low) - self.enter_coef_2 * self.day_low # reversal sell price
self.buy_break = self.buy_setup + self.break_coef * (self.sell_setup - self.buy_setup) # breakout buy price
self.sell_break = self.sell_setup - self.break_coef * (self.sell_setup - self.buy_setup) # breakout sell price
self.day_open = bar.open_price
self.day_high = bar.high_price
self.day_close = bar.close_price
self.day_low = bar.low_price
# Today
else:
self.day_high = max(self.day_high, bar.high_price)
self.day_low = min(self.day_low, bar.low_price)
self.day_close = bar.close_price
if not self.sell_setup:
return
self.tend_high, self.tend_low = am.donchian(self.donchian_window)
if bar.datetime < self.exit_time:
if self.pos == 0:
self.intra_trade_low = bar.low_price
self.intra_trade_high = bar.high_price
if self.tend_high > self.sell_setup:
long_entry = max(self.buy_break, self.day_high)
self.buy(long_entry, self.fixed_size, stop=True)
self.short(self.sell_enter, self.multiplier * self.fixed_size, stop=True)
elif self.tend_low < self.buy_setup:
short_entry = min(self.sell_break, self.day_low)
self.short(short_entry, self.fixed_size, stop=True)
self.buy(self.buy_enter, self.multiplier * self.fixed_size, stop=True)
elif self.pos > 0:
self.intra_trade_high = max(self.intra_trade_high, bar.high_price)
long_stop = self.intra_trade_high * (1 - self.trailing_long / 100)
self.sell(long_stop, abs(self.pos), stop=True)
elif self.pos < 0:
self.intra_trade_low = min(self.intra_trade_low, bar.low_price)
short_stop = self.intra_trade_low * (1 + self.trailing_short / 100)
self.cover(short_stop, abs(self.pos), stop=True)
# Close existing position
else:
if self.pos > 0:
self.sell(bar.close_price * 0.99, abs(self.pos))
elif self.pos < 0:
self.cover(bar.close_price * 1.01, abs(self.pos))
self.put_event()
def on_order(self, order: OrderData):
"""
Callback of new order data update.
"""
pass
def on_trade(self, trade: TradeData):
"""
Callback of new trade data update.
"""
self.put_event()
def on_stop_order(self, stop_order: StopOrder):
"""
Callback of stop order update.
"""
pass
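The six price levels the strategy computes at the start of each new trading day can be sketched as a standalone function, using the same formulas and default coefficients as the strategy (`r_breaker_levels` is a hypothetical helper for illustration, not part of vnpy):

```python
def r_breaker_levels(high, low, close, setup_coef=0.25, break_coef=0.2,
                     enter_coef_1=1.07, enter_coef_2=0.07):
    """Compute the six R-Breaker levels from the previous day's OHLC."""
    buy_setup = low - setup_coef * (high - close)        # observed buy price
    sell_setup = high + setup_coef * (close - low)       # observed sell price
    buy_enter = (enter_coef_1 / 2) * (high + low) - enter_coef_2 * high   # reversal buy price
    sell_enter = (enter_coef_1 / 2) * (high + low) - enter_coef_2 * low   # reversal sell price
    buy_break = buy_setup + break_coef * (sell_setup - buy_setup)         # breakout buy price
    sell_break = sell_setup - break_coef * (sell_setup - buy_setup)       # breakout sell price
    return {"buy_setup": buy_setup, "sell_setup": sell_setup,
            "buy_enter": buy_enter, "sell_enter": sell_enter,
            "buy_break": buy_break, "sell_break": sell_break}
```

For example, with a previous day of high 110, low 100, close 105, the observed buy/sell prices are 98.75 and 111.25, and the breakout buy/sell prices are 101.25 and 108.75.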
I am trying to make a group and fill it with layers in PyQGIS.
A piece of my code:
from PyQt4 import QtCore, QtGui
from PyQt4.QtCore import QFileInfo
from qgis.core import *
from qgis.utils import iface
group = iface.legendInterface().addGroup( 'abc')
group.setName("Group X")
But I only get the error AttributeError: 'int' object has no attribute 'setName'
Do I need to import some other class, or where is the problem?
This article walks through the implementation details of several key classes used to implement FCOS in MMDetection; the previous article listed the classes involved.
Figure 1: FCOS architecture
FCOS is a single-stage detector. In MMDetection, single-stage detectors inherit from the class SingleStageDetector, whose base class is BaseDetector. The main content of this class is as follows:
class BaseDetector(nn.Module, metaclass=ABCMeta):
"""Base class for detectors."""
def __init__(self):
super(BaseDetector, self).__init__()
self.fp16_enabled = False # FP16 support, disabled by default
@property
def with_neck(self):
# Whether the model has a neck; similar helpers exist such as
# with_shared_head, with_bbox and with_mask.
# The property decorator lets these be accessed as attributes (obj.with_neck)
return hasattr(self, 'neck') and self.neck is not None
@abstractmethod # abstract method: subclasses must implement it and this class cannot be instantiated
def extract_feat(self, imgs):
pass
def extract_feats(self, imgs): # unlike the method above, this one extracts features from multiple images
assert isinstance(imgs, list) # images are passed in as a list
return [self.extract_feat(img) for img in imgs] # call extract_feat and build a list of results
@abstractmethod
def forward_train(self, imgs, img_metas, **kwargs):
# imgs: (N, C, H, W)
# img_metas: image meta information such as img_shape, scale_factor, etc.
# **kwargs: other arguments
pass
@abstractmethod # test
def simple_test(self, img, img_metas, **kwargs):
pass
@abstractmethod # test with test-time augmentation
def aug_test(self, imgs, img_metas, **kwargs):
pass
def init_weights(self, pretrained=None): # weight initialization
if pretrained is not None:
logger = get_root_logger() # log which pretrained model is loaded
print_log(f'load model from: {pretrained}', logger=logger)
def forward_test(self, imgs, img_metas, **kwargs):
for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: # type checks
if not isinstance(var, list):
raise TypeError(f'{name} must be a list, but got {type(var)}')
num_augs = len(imgs) # number of images in this batch
if num_augs != len(img_metas):
raise ValueError(f'num of augmentations ({len(imgs)}) '
f'!= num of image meta ({len(img_metas)})')
# only one image per batch is supported
if num_augs == 1:
if 'proposals' in kwargs:
kwargs['proposals'] = kwargs['proposals'][0]
return self.simple_test(imgs[0], img_metas[0], **kwargs)
else:
assert imgs[0].size(0) == 1, 'aug test does not support inference with batch size ' \
f'{imgs[0].size(0)}'
assert 'proposals' not in kwargs
return self.aug_test(imgs, img_metas, **kwargs)
@auto_fp16(apply_to=('img', ))
def forward(self, img, img_metas, return_loss=True, **kwargs):
if return_loss:
return self.forward_train(img, img_metas, **kwargs)
else:
return self.forward_test(img, img_metas, **kwargs)
def _parse_losses(self, losses):
# parse the losses output by the network and store them in a dict
log_vars = OrderedDict()
for loss_name, loss_value in losses.items():
if isinstance(loss_value, torch.Tensor):
log_vars[loss_name] = loss_value.mean()
elif isinstance(loss_value, list):
log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value)
else:
raise TypeError(f'{loss_name} is not a tensor or list of tensors')
# compute the total loss
loss = sum(_value for _key, _value in log_vars.items() if 'loss' in _key)
log_vars['loss'] = loss
for loss_name, loss_value in log_vars.items():
# call all_reduce when training in distributed mode
if dist.is_available() and dist.is_initialized():
loss_value = loss_value.data.clone()
dist.all_reduce(loss_value.div_(dist.get_world_size()))
log_vars[loss_name] = loss_value.item()
# return the loss information
return loss, log_vars
def train_step(self, data, optimizer):
# obtain the loss information
losses = self(**data)
loss, log_vars = self._parse_losses(losses)
# build the outputs
outputs = dict(loss=loss, log_vars=log_vars, num_samples=len(data['img_metas']))
# return
return outputs
def val_step(self, data, optimizer):
losses = self(**data)
loss, log_vars = self._parse_losses(losses)
outputs = dict(
loss=loss, log_vars=log_vars, num_samples=len(data['img_metas']))
return outputs
def show_result(self, img, result, score_thr=0.3, bbox_color='green', text_color='green',
thickness=1, font_scale=0.5, win_name='', show=False, wait_time=0, out_file=None):
# read and copy the image
img = mmcv.imread(img)
img = img.copy()
# whether segmentation results are included
if isinstance(result, tuple):
bbox_result, segm_result = result
if isinstance(segm_result, tuple):
segm_result = segm_result[0]
else:
bbox_result, segm_result = result, None
# get the bounding-box coordinates
bboxes = np.vstack(bbox_result)
# get the bounding-box labels
labels = [
np.full(bbox.shape[0], i, dtype=np.int32)
for i, bbox in enumerate(bbox_result)
]
labels = np.concatenate(labels)
# if out_file is specified, do not show the result in the current window
if out_file is not None:
show = False
# draw the detection results
mmcv.imshow_det_bboxes(
img, # image
bboxes, # bounding-box coordinates
labels, # bounding-box labels
class_names=self.CLASSES, # class names
score_thr=score_thr, # score threshold
bbox_color=bbox_color, # bounding-box color
text_color=text_color, # text color
thickness=thickness, # line thickness
font_scale=font_scale, # font scale
win_name=win_name, # window name
show=show, # whether to display the result
wait_time=wait_time, # wait time
out_file=out_file) # output file, if any
if not (show or out_file):
return img
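_parse_losses above reduces each tensor (or list of tensors) to a scalar and then sums every entry whose key contains 'loss'. With plain floats standing in for already-averaged tensors, the reduction logic can be sketched as follows (a simplified sketch, not the mmdet code):

```python
from collections import OrderedDict

def parse_losses(losses):
    # Reduce each entry to a scalar: a list is summed element-wise
    # (each element plays the role of an already-averaged tensor),
    # a plain scalar passes through unchanged.
    log_vars = OrderedDict()
    for name, value in losses.items():
        log_vars[name] = sum(value) if isinstance(value, list) else value
    # Total loss: sum of every entry whose key contains 'loss'
    # (metrics such as accuracy are logged but excluded).
    total = sum(v for k, v in log_vars.items() if 'loss' in k)
    log_vars['loss'] = total
    return total, log_vars
```

For example, {'loss_cls': 0.5, 'loss_bbox': [0.1, 0.3], 'acc': 0.9} yields a total loss of 0.9, while 'acc' is only logged.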
The main part of SingleStageDetector, a subclass of BaseDetector, is as follows:
@DETECTORS.register_module() # register the class with the DETECTORS registry
class SingleStageDetector(BaseDetector): # a single-stage detector predicts boxes and classes directly from backbone features
def __init__(self, backbone, neck=None, bbox_head=None, train_cfg=None, test_cfg=None,
pretrained=None):
super(SingleStageDetector, self).__init__()
self.backbone = build_backbone(backbone) # build the backbone
if neck is not None: # build the neck if one is configured
self.neck = build_neck(neck)
bbox_head.update(train_cfg=train_cfg) # update the head config
bbox_head.update(test_cfg=test_cfg) # update the head config
self.bbox_head = build_head(bbox_head) # build the head
self.train_cfg = train_cfg # training configuration
self.test_cfg = test_cfg # testing configuration
self.init_weights(pretrained=pretrained) # pretrained model
def init_weights(self, pretrained=None):
# initialize the backbone weights
super(SingleStageDetector, self).init_weights(pretrained)
self.backbone.init_weights(pretrained=pretrained)
# initialize the neck weights
if self.with_neck:
if isinstance(self.neck, nn.Sequential):
for m in self.neck:
m.init_weights()
else:
self.neck.init_weights()
# initialize the head weights
self.bbox_head.init_weights()
def extract_feat(self, img): # implements the abstract method of BaseDetector
# pass the input through backbone + neck
x = self.backbone(img)
if self.with_neck:
x = self.neck(x)
return x
# implements the abstract method of BaseDetector
def forward_train(self, img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None):
# pass the input img through backbone + neck to get x
x = self.extract_feat(img)
# call the head's forward_train method to compute the losses
losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore)
return losses
def simple_test(self, img, img_metas, rescale=False): # implements the abstract method of BaseDetector
# pass through backbone + neck
x = self.extract_feat(img)
# pass through the head
outs = self.bbox_head(x)
# convert the head outputs into concrete bounding-box lists
bbox_list = self.bbox_head.get_bboxes(*outs, img_metas, rescale=rescale)
# skip post-processing when exporting to ONNX
if torch.onnx.is_in_onnx_export():
return bbox_list
# parse the bounding-box list
bbox_results = [bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes)
for det_bboxes, det_labels in bbox_list]
# return
return bbox_results
def aug_test(self, imgs, img_metas, rescale=False): # implements the abstract method of BaseDetector
assert hasattr(self.bbox_head, 'aug_test'), f'{self.bbox_head.__class__.__name__}' \
' does not support test-time augmentation'
# extract features from a group of images
feats = self.extract_feats(imgs)
# call the head's aug_test method
return [self.bbox_head.aug_test(feats, img_metas, rescale=rescale)]
Finally, the FCOS class itself:
@DETECTORS.register_module()
class FCOS(SingleStageDetector):
def __init__(self, backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None):
super(FCOS, self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg, pretrained)
FCOS is an anchor-free detector. Its head class inherits from AnchorFreeHead, which in turn has two base classes, BaseDenseHead and BBoxTestMixin. Let us first look at these two base classes:
class BaseDenseHead(nn.Module, metaclass=ABCMeta):
def __init__(self):
super(BaseDenseHead, self).__init__()
@abstractmethod # abstract method: subclasses must implement it
def loss(self, **kwargs):
# compute the losses
pass
@abstractmethod # abstract method: subclasses must implement it
def get_bboxes(self, **kwargs):
# convert the model outputs into bounding boxes
pass
def forward_train(self,
x, # features output by the FPN
img_metas, # image meta information
gt_bboxes, # ground-truth boxes
gt_labels=None, # ground-truth labels; distinguishes anchor-free from anchor-based heads
gt_bboxes_ignore=None,# ground-truth boxes to ignore
proposal_cfg=None, # proposal config; distinguishes anchor-free from anchor-based heads
**kwargs):
# return the losses and, optionally, the proposals
outs = self(x)
# whether ground-truth labels are available
if gt_labels is None:
loss_inputs = outs + (gt_bboxes, img_metas)
else:
loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
# compute the losses
losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
# obtain the proposals
if proposal_cfg is None:
return losses
else:
proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg)
return losses, proposal_list
class BBoxTestMixin(object):
# merge augmented results
def merge_aug_bboxes(self, aug_bboxes, aug_scores, img_metas):
recovered_bboxes = []
for bboxes, img_info in zip(aug_bboxes, img_metas):
img_shape = img_info[0]['img_shape']
scale_factor = img_info[0]['scale_factor']
flip = img_info[0]['flip']
flip_direction = img_info[0]['flip_direction']
bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip,
flip_direction)
recovered_bboxes.append(bboxes)
bboxes = torch.cat(recovered_bboxes, dim=0)
if aug_scores is None:
return bboxes
else:
scores = torch.cat(aug_scores, dim=0)
return bboxes, scores
def aug_test_bboxes(self, feats, img_metas, rescale=False):
gb_sig = signature(self.get_bboxes)
gb_args = [p.name for p in gb_sig.parameters.values()]
gbs_sig = signature(self._get_bboxes_single)
gbs_args = [p.name for p in gbs_sig.parameters.values()]
assert ('with_nms' in gb_args) and ('with_nms' in gbs_args), \
f'{self.__class__.__name__} does not support test-time augmentation'
aug_bboxes = []
aug_scores = []
aug_factors = []
for x, img_meta in zip(feats, img_metas):
outs = self.forward(x)
bbox_inputs = outs + (img_meta, self.test_cfg, False, False)
bbox_outputs = self.get_bboxes(*bbox_inputs)[0]
aug_bboxes.append(bbox_outputs[0])
aug_scores.append(bbox_outputs[1])
if len(bbox_outputs) >= 3:
aug_factors.append(bbox_outputs[2])
merged_bboxes, merged_scores = self.merge_aug_bboxes(aug_bboxes, aug_scores, img_metas)
merged_factors = torch.cat(aug_factors, dim=0) if aug_factors else None
det_bboxes, det_labels = multiclass_nms(
merged_bboxes,
merged_scores,
self.test_cfg.score_thr,
self.test_cfg.nms,
self.test_cfg.max_per_img,
score_factors=merged_factors)
if rescale:
_det_bboxes = det_bboxes
else:
_det_bboxes = det_bboxes.clone()
_det_bboxes[:, :4] *= det_bboxes.new_tensor(img_metas[0][0]['scale_factor'])
bbox_results = bbox2result(_det_bboxes, det_labels, self.num_classes)
return bbox_results
The main content of the AnchorFreeHead class is as follows:
@HEADS.register_module() # register the class with the HEADS registry
class AnchorFreeHead(BaseDenseHead, BBoxTestMixin):
def __init__(self,
num_classes, # number of classes
in_channels, # number of input channels
feat_channels=256, # number of feature-map channels
stacked_convs=4, # number of stacked conv layers
strides=(4, 8, 16, 32, 64),# downsampling factors
dcn_on_last_conv=False, # whether to use DCN in the last conv layer
conv_bias='auto', # bias setting of the conv layers
loss_cls=dict( # classification-branch loss settings
type='FocalLoss', # the classification branch uses FocalLoss
use_sigmoid=True, # whether to use sigmoid
gamma=2.0, # FocalLoss parameter 1
alpha=0.25, # FocalLoss parameter 2
loss_weight=1.0), # classification loss weight
loss_bbox=dict( # regression-branch loss settings
type='IoULoss', # the regression branch uses IoULoss
loss_weight=1.0), # regression loss weight
conv_cfg=None,
norm_cfg=None,
train_cfg=None,
test_cfg=None):
super(AnchorFreeHead, self).__init__()
self.num_classes = num_classes
self.cls_out_channels = num_classes # the classification branch outputs one channel per class
self.in_channels = in_channels
self.feat_channels = feat_channels
self.stacked_convs = stacked_convs
self.strides = strides
self.dcn_on_last_conv = dcn_on_last_conv
assert conv_bias == 'auto' or isinstance(conv_bias, bool)
self.conv_bias = conv_bias
self.loss_cls = build_loss(loss_cls) # build the classification loss
self.loss_bbox = build_loss(loss_bbox) # build the regression loss
self.train_cfg = train_cfg
self.test_cfg = test_cfg
self.conv_cfg = conv_cfg
self.norm_cfg = norm_cfg
self.fp16_enabled = False
self._init_layers() # layer initialization
def _init_layers(self):
# initialization
self._init_cls_convs()
self._init_reg_convs()
self._init_predictor()
def _init_cls_convs(self):
# build the classification branch
self.cls_convs = nn.ModuleList()
for i in range(self.stacked_convs): # iterate over the stacked conv layers
chn = self.in_channels if i == 0 else self.feat_channels # determine the input channels
if self.dcn_on_last_conv and i == self.stacked_convs - 1: # whether to replace the last layer with DCN
conv_cfg = dict(type='DCNv2')
else:
conv_cfg = self.conv_cfg
self.cls_convs.append(
ConvModule( # conv + norm + activation
chn,
self.feat_channels,
3,
stride=1,
padding=1,
conv_cfg=conv_cfg,
norm_cfg=self.norm_cfg,
bias=self.conv_bias))
def _init_reg_convs(self):
# build the regression branch
self.reg_convs = nn.ModuleList()
for i in range(self.stacked_convs): # iterate over the stacked conv layers
chn = self.in_channels if i == 0 else self.feat_channels # determine the input channels
if self.dcn_on_last_conv and i == self.stacked_convs - 1: # whether to replace the last layer with DCN
conv_cfg = dict(type='DCNv2')
else:
conv_cfg = self.conv_cfg
self.reg_convs.append(
ConvModule( # conv + norm + activation
chn,
self.feat_channels,
3,
stride=1,
padding=1,
conv_cfg=conv_cfg,
norm_cfg=self.norm_cfg,
bias=self.conv_bias))
def _init_predictor(self):
# build the prediction layers, i.e. the final convs of the classification and regression branches
self.conv_cls = nn.Conv2d(self.feat_channels, self.cls_out_channels, 3, padding=1)
self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1)
def init_weights(self):
# initialize the weights
for m in self.cls_convs: # classification branch
if isinstance(m.conv, nn.Conv2d):
normal_init(m.conv, std=0.01)
for m in self.reg_convs: # regression branch
if isinstance(m.conv, nn.Conv2d):
normal_init(m.conv, std=0.01)
bias_cls = bias_init_with_prob(0.01)
normal_init(self.conv_cls, std=0.01, bias=bias_cls)
normal_init(self.conv_reg, std=0.01)
def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
missing_keys, unexpected_keys, error_msgs):
# rename parameters so that checkpoints from older versions can still be loaded
version = local_metadata.get('version', None)
if version is None:
# the key names differ from older versions, e.g. fcos_cls became conv_cls
bbox_head_keys = [k for k in state_dict.keys() if k.startswith(prefix)]
# lists holding the old and new key names
ori_predictor_keys = []
new_predictor_keys = []
# e.g. fcos_cls or fcos_reg
for key in bbox_head_keys:
ori_predictor_keys.append(key)
key = key.split('.')
conv_name = None
if key[1].endswith('cls'):
conv_name = 'conv_cls'
elif key[1].endswith('reg'):
conv_name = 'conv_reg'
elif key[1].endswith('centerness'):
conv_name = 'conv_centerness'
else:
raise NotImplementedError
if conv_name is not None:
key[1] = conv_name
new_predictor_keys.append('.'.join(key))
else:
ori_predictor_keys.pop(-1)
# update the dict with the new keys
for i in range(len(new_predictor_keys)):
state_dict[new_predictor_keys[i]] = state_dict.pop(ori_predictor_keys[i])
super()._load_from_state_dict(state_dict, prefix, local_metadata,
strict, missing_keys, unexpected_keys, error_msgs)
def forward(self, feats):
# input is a tuple of 4-D tensors; return classification scores and bbox predictions
return multi_apply(self.forward_single, feats)[:2]
def forward_single(self, x):
cls_feat = x
reg_feat = x
# classification-branch prediction
for cls_layer in self.cls_convs:
cls_feat = cls_layer(cls_feat)
cls_score = self.conv_cls(cls_feat)
# regression-branch prediction
for reg_layer in self.reg_convs:
reg_feat = reg_layer(reg_feat)
bbox_pred = self.conv_reg(reg_feat)
# return
return cls_score, bbox_pred, cls_feat, reg_feat
@abstractmethod # abstract method
@force_fp32(apply_to=('cls_scores', 'bbox_preds'))
def loss(self, cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None):
# compute the head losses from the predictions and the ground truth
raise NotImplementedError
@abstractmethod # abstract method
@force_fp32(apply_to=('cls_scores', 'bbox_preds'))
def get_bboxes(self, cls_scores, bbox_preds, img_metas, cfg=None, rescale=None):
# convert the network outputs into bounding-box predictions
raise NotImplementedError
@abstractmethod # abstract method
def get_targets(self, points, gt_bboxes_list, gt_labels_list):
# compute the classification, regression and center-ness targets
raise NotImplementedError
def _get_points_single(self, featmap_size, stride, dtype, device, flatten=False):
# get the coordinates of all points on a single feature map
h, w = featmap_size
# x_range = Tensor([0,1,...,w-1])
x_range = torch.arange(w, dtype=dtype, device=device)
# y_range = Tensor([0,1,...,h-1])
y_range = torch.arange(h, dtype=dtype, device=device)
# y varies along rows and x along columns; both results have shape (h,w)
y, x = torch.meshgrid(y_range, x_range)
# whether to flatten
if flatten:
y = y.flatten()
x = x.flatten()
# return
return y, x
def get_points(self, featmap_sizes, dtype, device, flatten=False):
# get the point coordinates on multiple feature maps at once
mlvl_points = []
for i in range(len(featmap_sizes)):
mlvl_points.append(self._get_points_single(featmap_sizes[i], self.strides[i],
dtype, device, flatten))
# return as a list
return mlvl_points
def aug_test(self, feats, img_metas, rescale=False):
return self.aug_test_bboxes(feats, img_metas, rescale=rescale)
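Several of the methods above rely on multi_apply, which applies a function to each tuple of aligned per-level (or per-image) inputs and transposes the results. To my understanding, mmdet's helper is essentially the following:

```python
from functools import partial

def multi_apply(func, *args, **kwargs):
    # Bind the shared keyword arguments, apply func to each tuple of
    # aligned inputs, then transpose the list of result tuples into a
    # tuple of per-output lists.
    pfunc = partial(func, **kwargs) if kwargs else func
    map_results = map(pfunc, *args)
    return tuple(map(list, zip(*map_results)))
```

So a function returning (a, b) per level yields a pair of lists, one with all the a values and one with all the b values.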
Finally, the key content of FCOSHead:
@HEADS.register_module() # register the class with the HEADS registry
class FCOSHead(AnchorFreeHead):
def __init__(self,
num_classes, # number of classes
in_channels, # number of input channels
# FCOS assigns a regression range to each FPN level; targets outside the range are not regressed at that level
regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), (512, INF)),
center_sampling=False, # whether to use center sampling
center_sample_radius=1.5, # radius of the center region when center sampling is used
norm_on_bbox=False, # whether to normalize the regression targets
centerness_on_reg=False, # whether to attach the center-ness branch to the regression branch
loss_cls=dict( # classification-branch FocalLoss settings
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0),
loss_bbox=dict(type='IoULoss', loss_weight=1.0), # regression-branch IoULoss settings
loss_centerness=dict( # center-ness branch CrossEntropyLoss settings
type='CrossEntropyLoss',
use_sigmoid=True,
loss_weight=1.0),
norm_cfg=dict(type='GN', num_groups=32, requires_grad=True),
**kwargs):
self.regress_ranges = regress_ranges
self.center_sampling = center_sampling
self.center_sample_radius = center_sample_radius
self.norm_on_bbox = norm_on_bbox
self.centerness_on_reg = centerness_on_reg
super().__init__(num_classes, in_channels, loss_cls=loss_cls, loss_bbox=loss_bbox,
norm_cfg=norm_cfg, **kwargs)
self.loss_centerness = build_loss(loss_centerness) # build the center-ness loss
def _init_layers(self):
# initialize the layers of the head
super()._init_layers()
self.conv_centerness = nn.Conv2d(self.feat_channels, 1, 3, padding=1)
self.scales = nn.ModuleList([Scale(1.0) for _ in self.strides])
def init_weights(self):
# initialize the weights of the head
super().init_weights()
normal_init(self.conv_centerness, std=0.01)
def forward(self, feats):
# similar to the base class AnchorFreeHead
return multi_apply(self.forward_single, feats, self.scales, self.strides)
def forward_single(self, x, scale, stride):
# call AnchorFreeHead.forward_single for the shared results; FCOS adds a center-ness branch
cls_score, bbox_pred, cls_feat, reg_feat = super().forward_single(x)
# attach center-ness to the regression branch or the classification branch
if self.centerness_on_reg:
centerness = self.conv_centerness(reg_feat)
else:
centerness = self.conv_centerness(cls_feat)
# scale is a learnable per-level factor, initialized to 1.0
bbox_pred = scale(bbox_pred).float()
if self.norm_on_bbox:
bbox_pred = F.relu(bbox_pred)
if not self.training: # at inference time
# multiply by the stride to scale the prediction back to the input image
bbox_pred *= stride
else:
bbox_pred = bbox_pred.exp()
# return the classification score, bbox prediction and center-ness value
return cls_score, bbox_pred, centerness
The two forward functions above define the forward pass; now let us look at the other important functions of the FCOSHead class. The first pair converts the outputs of the three branches into bounding boxes: get_bboxes iterates over the images in the batch, and _get_bboxes_single generates the boxes for a single image across all pyramid levels:
@force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses'))
def get_bboxes(self, cls_scores, bbox_preds, centernesses, img_metas, cfg=None,
rescale=False, with_nms=True):
# parse the network outputs into bounding-box information
assert len(cls_scores) == len(bbox_preds)
# number of FPN output levels
num_levels = len(cls_scores)
# feature-map sizes
featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
# get the point coordinates on all feature maps
mlvl_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, bbox_preds[0].device)
result_list = []
for img_id in range(len(img_metas)):
# collect the classification scores
cls_score_list = [cls_scores[i][img_id].detach() for i in range(num_levels)]
# collect the bbox predictions
bbox_pred_list = [bbox_preds[i][img_id].detach() for i in range(num_levels)]
# collect the center-ness values
centerness_pred_list = [centernesses[i][img_id].detach() for i in range(num_levels)]
img_shape = img_metas[img_id]['img_shape']
scale_factor = img_metas[img_id]['scale_factor']
# call _get_bboxes_single to generate the boxes for a single image
det_bboxes = self._get_bboxes_single(
cls_score_list, bbox_pred_list, centerness_pred_list,
mlvl_points, img_shape, scale_factor, cfg, rescale, with_nms)
result_list.append(det_bboxes)
# return
return result_list
def _get_bboxes_single(self, cls_scores, bbox_preds, centernesses, mlvl_points, img_shape,
scale_factor, cfg, rescale=False, with_nms=True):
cfg = self.test_cfg if cfg is None else cfg
assert len(cls_scores) == len(bbox_preds) == len(mlvl_points)
# hold the multi-level results
mlvl_bboxes = []
mlvl_scores = []
mlvl_centerness = []
# iterate over the levels
for cls_score, bbox_pred, centerness, points in zip(
cls_scores, bbox_preds, centernesses, mlvl_points):
assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
# permute, reshape and apply sigmoid
scores = cls_score.permute(1, 2, 0).reshape(-1, self.cls_out_channels).sigmoid()
centerness = centerness.permute(1, 2, 0).reshape(-1).sigmoid()
bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4)
nms_pre = cfg.get('nms_pre', -1)
# if there are more boxes than nms_pre, keep only the nms_pre highest-scoring ones,
# where the ranking score is score * centerness
if nms_pre > 0 and scores.shape[0] > nms_pre:
max_scores, _ = (scores * centerness[:, None]).max(dim=1)
_, topk_inds = max_scores.topk(nms_pre)
points = points[topk_inds, :]
bbox_pred = bbox_pred[topk_inds, :]
scores = scores[topk_inds, :]
centerness = centerness[topk_inds]
# decode the predictions into boxes
bboxes = distance2bbox(points, bbox_pred, max_shape=img_shape)
mlvl_bboxes.append(bboxes)
mlvl_scores.append(scores)
mlvl_centerness.append(centerness)
# concatenate the results
mlvl_bboxes = torch.cat(mlvl_bboxes)
# rescale to the original image size
if rescale:
mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor)
# concatenate
mlvl_scores = torch.cat(mlvl_scores)
# pad a background-class column
padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1)
mlvl_scores = torch.cat([mlvl_scores, padding], dim=1)
mlvl_centerness = torch.cat(mlvl_centerness)
# NMS
if with_nms:
det_bboxes, det_labels = multiclass_nms(
mlvl_bboxes,
mlvl_scores,
cfg.score_thr,
cfg.nms,
cfg.max_per_img,
score_factors=mlvl_centerness)
return det_bboxes, det_labels
else:
return mlvl_bboxes, mlvl_scores, mlvl_centerness
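distance2bbox decodes a point plus its predicted (left, top, right, bottom) distances back into corner coordinates, optionally clipping to the image boundary. A scalar sketch of the decoding (the real mmdet function is vectorized over tensors; `distance_to_bbox` is a hypothetical name):

```python
def distance_to_bbox(point, dist, max_shape=None):
    # point: (x, y); dist: (left, top, right, bottom)
    x, y = point
    l, t, r, b = dist
    x1, y1, x2, y2 = x - l, y - t, x + r, y + b
    if max_shape is not None:  # max_shape: (height, width)
        h, w = max_shape
        x1, x2 = max(0.0, min(x1, w)), max(0.0, min(x2, w))
        y1, y2 = max(0.0, min(y1, h)), max(0.0, min(y2, h))
    return x1, y1, x2, y2
```

For example, point (50, 50) with distances (10, 20, 30, 40) decodes to the box (40, 30, 80, 90).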
Next, the function _get_points_single obtains the point coordinates on a single feature map of the pyramid; combined with the parent class AnchorFreeHead's get_points function, this yields the point coordinates on all pyramid levels.
def _get_points_single(self, featmap_size, stride, dtype, device, flatten=False):
# call AnchorFreeHead._get_points_single to get the point coordinates on one feature map
y, x = super()._get_points_single(featmap_size, stride, dtype, device)
# the formula (floor(s/2)+xs, floor(s/2)+ys) from the paper: points are the positions
# the feature-map cells map back to on the input image
points = torch.stack((x.reshape(-1) * stride, y.reshape(-1) * stride), dim=-1) + stride // 2
return points
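For a stride-s level, the cell (x, y) therefore maps back to the image coordinate (s*x + s//2, s*y + s//2). A plain-Python sketch of the flattened grid (row-major, matching the meshgrid order above; `grid_points` is a hypothetical helper):

```python
def grid_points(h, w, stride):
    # one (x, y) image coordinate per feature-map cell, row-major
    return [(x * stride + stride // 2, y * stride + stride // 2)
            for y in range(h) for x in range(w)]
```

A 2x2 feature map at stride 8, for instance, maps to the image points (4, 4), (12, 4), (4, 12) and (12, 12).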
Next, the targets of the three branches are computed: the classification targets, the bbox regression targets and the centerness targets:
def get_targets(self, points, gt_bboxes_list, gt_labels_list):
# both lengths equal the number of pyramid levels
assert len(points) == len(self.regress_ranges)
num_levels = len(points)
# expand: points=(num_levels,num_points,2) => expanded_regress_ranges=(num_points,2)
# the level dimension of points is folded away; the number of points identifies the level.
# For the first level this looks like:
# [[-1,64],[-1,64],...,[-1,64]]=(num_points_of_0_level,2)
expanded_regress_ranges = [
points[i].new_tensor(self.regress_ranges[i])[None].expand_as(points[i])
for i in range(num_levels)
]
# concatenate: concat_regress_ranges=(num_points,2)
concat_regress_ranges = torch.cat(expanded_regress_ranges, dim=0)
# concat_points=(num_points,2)
concat_points = torch.cat(points, dim=0)
# number of points per level
num_points = [center.size(0) for center in points]
# compute the targets for each input image
labels_list, bbox_targets_list = multi_apply(
self._get_target_single,
gt_bboxes_list,
gt_labels_list,
points=concat_points,
regress_ranges=concat_regress_ranges,
num_points_per_lvl=num_points)
# split to obtain the labels of each image at each level
labels_list = [labels.split(num_points, 0) for labels in labels_list]
# split to obtain the bbox regression targets of each image at each level
bbox_targets_list = [bbox_targets.split(num_points, 0) for bbox_targets in bbox_targets_list]
# concatenate across levels
concat_lvl_labels = []
concat_lvl_bbox_targets = []
# iterate over the levels and concatenate
for i in range(num_levels):
concat_lvl_labels.append(
torch.cat([labels[i] for labels in labels_list]))
bbox_targets = torch.cat(
[bbox_targets[i] for bbox_targets in bbox_targets_list])
# whether to normalize the targets
if self.norm_on_bbox:
bbox_targets = bbox_targets / self.strides[i]
concat_lvl_bbox_targets.append(bbox_targets)
# return
return concat_lvl_labels, concat_lvl_bbox_targets
def _get_target_single(self, gt_bboxes, gt_labels, points, regress_ranges, num_points_per_lvl):
# compute the classification and regression targets for a single image
num_points = points.size(0)
num_gts = gt_labels.size(0)
if num_gts == 0:
return gt_labels.new_full((num_points,), self.num_classes), \
gt_bboxes.new_zeros((num_points, 4))
# compute the areas of the ground-truth boxes, gt_bboxes=(num_gts,4)
areas = (gt_bboxes[:, 2] - gt_bboxes[:, 0]) * (gt_bboxes[:, 3] - gt_bboxes[:, 1])
areas = areas[None].repeat(num_points, 1)
# expand regress_ranges to align with the number of ground-truth boxes
regress_ranges = regress_ranges[:, None, :].expand(num_points, num_gts, 2)
# [None] adds a dimension: (1,num_gts,4) => (num_points,num_gts,4)
gt_bboxes = gt_bboxes[None].expand(num_points, num_gts, 4)
# points=(num_points,2); the two components are the x and y coordinates
xs, ys = points[:, 0], points[:, 1]
# xs=(num_points,) => (num_points,num_gts)
xs = xs[:, None].expand(num_points, num_gts)
ys = ys[:, None].expand(num_points, num_gts)
# distances from each point to the four sides of each ground-truth box, i.e. the
# regression targets from the paper; left/right/top/bottom=(num_points,num_gts)
left = xs - gt_bboxes[..., 0]
right = gt_bboxes[..., 2] - xs
top = ys - gt_bboxes[..., 1]
bottom = gt_bboxes[..., 3] - ys
# stack: bbox_targets=(num_points,num_gts,4)
bbox_targets = torch.stack((left, top, right, bottom), -1)
# with center sampling, only points falling into a central region of the box count as positives
if self.center_sampling:
# radius of the central region
radius = self.center_sample_radius
# center coordinates of the ground-truth boxes, center_xs=(num_points,num_gts)
center_xs = (gt_bboxes[..., 0] + gt_bboxes[..., 2]) / 2
center_ys = (gt_bboxes[..., 1] + gt_bboxes[..., 3]) / 2
center_gts = torch.zeros_like(gt_bboxes)
# stride.shape=(num_points,num_gts)
stride = center_xs.new_zeros(center_xs.shape)
# iterate over the points of every pyramid level
lvl_begin = 0
for lvl_idx, num_points_lvl in enumerate(num_points_per_lvl):
# index of the first point of the next level
lvl_end = lvl_begin + num_points_lvl
# because the FPN levels have different resolutions, the stride differs per level:
# self.strides=[8,16,32,64,128], self.strides[lvl_idx]*radius=[12,24,48,96,192]
# points of level 1 get 12, points of level 2 get 24, ..., points of level 5 get 192
stride[lvl_begin:lvl_end] = self.strides[lvl_idx] * radius
# move on to the next level
lvl_begin = lvl_end
# center_xs holds the box centers and stride the per-level radius [12,24,48,96,192];
# the expressions below shift each box center by the level's radius,
# e.g. on level 1 the center moves by 12 pixels
x_mins = center_xs - stride # center shifted left by the radius
y_mins = center_ys - stride # center shifted up by the radius
x_maxs = center_xs + stride # center shifted right by the radius
y_maxs = center_ys + stride # center shifted down by the radius
# define the valid central region for regression. torch.where(cond, a, b) returns a
# where cond holds and b elsewhere. With gt_bboxes=(x_min,y_min,x_max,y_max), the first
# line reads: if the center shifted left by the radius still lies inside the box, take
# that position; otherwise clip to the box's left edge. The other lines are analogous.
center_gts[..., 0] = torch.where(x_mins > gt_bboxes[..., 0], x_mins, gt_bboxes[..., 0])
center_gts[..., 1] = torch.where(y_mins > gt_bboxes[..., 1], y_mins, gt_bboxes[..., 1])
center_gts[..., 2] = torch.where(x_maxs > gt_bboxes[..., 2], gt_bboxes[..., 2], x_maxs)
center_gts[..., 3] = torch.where(y_maxs > gt_bboxes[..., 3], gt_bboxes[..., 3], y_maxs)
# distances from each point to the edges of the central region
cb_dist_left = xs - center_gts[..., 0]
cb_dist_right = center_gts[..., 2] - xs
cb_dist_top = ys - center_gts[..., 1]
cb_dist_bottom = center_gts[..., 3] - ys
# center_bbox=(num_points,num_gts,4)
center_bbox = torch.stack((cb_dist_left, cb_dist_top, cb_dist_right, cb_dist_bottom), -1)
# with center sampling, a point is positive if it falls into the central region,
# i.e. all four distances are positive
inside_gt_bbox_mask = center_bbox.min(-1)[0] > 0
else:
# without center sampling, a point is positive if it falls anywhere inside the box
inside_gt_bbox_mask = bbox_targets.min(-1)[0] > 0
# restrict the regression-target range of each location
max_regress_distance = bbox_targets.max(-1)[0]
# regress_ranges=((-1, 64), (64, 128), (128, 256), (256, 512), (512, INF))
inside_regress_range = ((max_regress_distance >= regress_ranges[..., 0])
                        & (max_regress_distance <= regress_ranges[..., 1]))
# if a location maps onto several objects, regress towards the one with the smaller area;
# invalid locations are first set to INF to make the filtering below easier
areas[inside_gt_bbox_mask == 0] = INF
areas[inside_regress_range == 0] = INF
# minimum area and its index
min_area, min_area_inds = areas.min(dim=1)
# label of the ground-truth box with that index
labels = gt_labels[min_area_inds]
# set to background
labels[min_area == INF] = self.num_classes
# regression target of the ground-truth box with that index
bbox_targets = bbox_targets[range(num_points), min_area_inds]
# return the classification targets and the bounding-box regression targets
return labels, bbox_targets
def centerness_target(self, pos_bbox_targets):
    # centerness is computed only at positive locations
    left_right = pos_bbox_targets[:, [0, 2]]
    top_bottom = pos_bbox_targets[:, [1, 3]]
    # the centerness formula
    centerness_targets = (
        left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * (
            top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])
    # take the square root and return
    return torch.sqrt(centerness_targets)
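To make the centerness formula concrete, here is a plain-Python scalar version for a single location (the method above operates on whole tensors; this sketch is only for illustration):

```python
import math

def centerness(left, top, right, bottom):
    # centerness = sqrt( (min(l, r) / max(l, r)) * (min(t, b) / max(t, b)) )
    lr = min(left, right) / max(left, right)
    tb = min(top, bottom) / max(top, bottom)
    return math.sqrt(lr * tb)

# a location at the exact center of its box scores 1.0
print(centerness(10, 10, 10, 10))  # 1.0
# an off-center location scores lower
print(centerness(2, 10, 8, 10))    # 0.5
```

The score decays towards 0 as the location drifts away from the box center, which is exactly why it is used to down-weight low-quality boxes predicted far from the center.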
The last important function is loss; it converts every variable involved in the loss computation into the format the loss functions expect:
@force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses'))
def loss(self, cls_scores, bbox_preds, centernesses, gt_bboxes, gt_labels, img_metas,
         gt_bboxes_ignore=None):
    # compute the loss of each branch
    # cls_scores=(N, num_points*num_classes, H, W)
    # bbox_preds=(N, num_points*4, H, W)
    # centernesses=(N, num_points*1, H, W)
    # gt_bboxes=(num_gts, 4)
    # gt_labels=(num_gts, 1)
    assert len(cls_scores) == len(bbox_preds) == len(centernesses)
    # feature-map size of each FPN level, i.e. the values of h and w
    featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
    # all points on the feature maps, all_level_points=(num_levels, num_points, 2)
    all_level_points = self.get_points(featmap_sizes, bbox_preds[0].dtype, bbox_preds[0].device)
    # regression targets: labels=(num_levels, num_points, 1), bbox_targets=(num_levels, num_points, 4)
    labels, bbox_targets = self.get_targets(all_level_points, gt_bboxes, gt_labels)
    # number of feature maps, i.e. the batch size
    num_imgs = cls_scores[0].size(0)
    # flatten; permute reorders the tensor dimensions
    # cls_score=(N, C*NP, H, W) => (N, H, W, C*NP) => (N*H*W*NP, C); the other tensors are handled likewise
    flatten_cls_scores = [
        cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels)
        for cls_score in cls_scores
    ]
    flatten_bbox_preds = [
        bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4)
        for bbox_pred in bbox_preds
    ]
    flatten_centerness = [
        centerness.permute(0, 2, 3, 1).reshape(-1)
        for centerness in centernesses
    ]
    # concatenate
    flatten_cls_scores = torch.cat(flatten_cls_scores)
    flatten_bbox_preds = torch.cat(flatten_bbox_preds)
    flatten_centerness = torch.cat(flatten_centerness)
    flatten_labels = torch.cat(labels)
    flatten_bbox_targets = torch.cat(bbox_targets)
    # repeat the point coordinates so they line up with the boxes
    flatten_points = torch.cat([points.repeat(num_imgs, 1) for points in all_level_points])
    # foreground class ids: [0, num_classes-1]; background class id: num_classes
    bg_class_ind = self.num_classes
    # indices of the positive samples, i.e. those with a foreground class id
    pos_inds = ((flatten_labels >= 0) & (flatten_labels < bg_class_ind)).nonzero().reshape(-1)
    # number of positive samples
    num_pos = len(pos_inds)
    # classification loss
    loss_cls = self.loss_cls(flatten_cls_scores, flatten_labels, avg_factor=num_pos + num_imgs)
    # select the boxes and centerness values that need processing, using the positive indices
    pos_bbox_preds = flatten_bbox_preds[pos_inds]
    pos_centerness = flatten_centerness[pos_inds]
    # if there are positive samples
    if num_pos > 0:
        # bounding-box regression targets of the positive samples
        pos_bbox_targets = flatten_bbox_targets[pos_inds]
        # centerness regression targets of the positive samples
        pos_centerness_targets = self.centerness_target(pos_bbox_targets)
        # positive locations; FCOS treats locations as samples
        pos_points = flatten_points[pos_inds]
        # decode the predictions into actual bounding boxes so the IoU loss can be computed
        pos_decoded_bbox_preds = distance2bbox(pos_points, pos_bbox_preds)
        pos_decoded_target_preds = distance2bbox(pos_points, pos_bbox_targets)
        # regression loss (IoULoss)
        loss_bbox = self.loss_bbox(
            pos_decoded_bbox_preds,
            pos_decoded_target_preds,
            weight=pos_centerness_targets,
            avg_factor=pos_centerness_targets.sum())
        # centerness branch loss (CrossEntropyLoss)
        loss_centerness = self.loss_centerness(pos_centerness, pos_centerness_targets)
    else:
        loss_bbox = pos_bbox_preds.sum()
        loss_centerness = pos_centerness.sum()
    # return the losses as a dictionary
    return dict(loss_cls=loss_cls, loss_bbox=loss_bbox, loss_centerness=loss_centerness)
The two most important member functions of the classes covered in this post are loss and get_targets. loss decodes the model outputs and the annotated ground truth into the format the loss functions expect, while get_targets finds a suitable regression target for every positive sample.
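The distance2bbox helper used in the decoding step converts a location plus its (left, top, right, bottom) distances back into box corners. A plain-Python sketch that mirrors this behavior for a single point (not mmdetection's actual tensor implementation):

```python
def distance2bbox(point, distance):
    # point: (x, y); distance: (left, top, right, bottom)
    # the box corners are recovered by shifting the point by each distance
    x, y = point
    left, top, right, bottom = distance
    return (x - left, y - top, x + right, y + bottom)

# a point at (50, 40) with targets (10, 5, 20, 15) decodes to the box (40, 35, 70, 55)
print(distance2bbox((50, 40), (10, 5, 20, 15)))  # (40, 35, 70, 55)
```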
https://github.com/open-mmlab/mmdetection.
This is not a law of nature but an empirical rule built from broad experience. It is also known as the 80/20 rule, and it is only a rough approximation.
Loops, branches, and other flow control.
Every place that has an if has one branch that is taken more often than the other, so more execution time is spent running that part of the program than the other.
Every place that has a loop that runs more than once contains code that executes more often than the surrounding code, so more time is spent there.
As an example, consider:
def DoSomeWork():
    try:
        for i in range(1000000):
            DoWork(i)
    except WorkException:
        print("Oh No!")
Here, print("Oh No!") will run at most once, and often never, while DoWork(i) will run roughly a million times.
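The 80/20 intuition above can be checked empirically with Python's built-in profiler. The function names below are illustrative stand-ins; the point is that the profile output shows the loop body dominating the call counts:

```python
import cProfile
import io
import pstats

def do_work(i):
    # a stand-in for the "hot" code inside the loop
    return i * i

def do_some_work():
    total = 0
    for i in range(100000):
        total += do_work(i)
    return total

profiler = cProfile.Profile()
profiler.enable()
do_some_work()
profiler.disable()

# sort by call count: do_work tops the list, as the 80/20 rule predicts
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("ncalls").print_stats(5)
print(stream.getvalue())
```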
I've been building an e-commerce site recently, and today I wanted to implement the shopping-cart feature.
The considerations are as follows: users visit the cart frequently and change it often (e.g. updating quantities), which means frequent reads and writes on the backend. So I decided to use Redis to reduce the reads and writes hitting the database.
This post looks at how to use Redis and integrate it into Spring Boot.
The cart table is designed as follows:
DROP TABLE IF EXISTS `cart`;
CREATE TABLE `cart` (
`uname` varchar(30) NOT NULL,
`pid` int(30) NOT NULL,
`num` int(10) DEFAULT NULL,
PRIMARY KEY (`uname`,`pid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Here pid uniquely identifies a product and uname uniquely identifies a user.
Questions to consider
Since the cart is a list of products, first decide how best to store a list.
How to represent uname in the cart, i.e. whose cart it is.
When to read from the database, when to update it, and when to evict the cache.
The database holds rows, Java works with objects, and the frontend needs JSON. When should the conversion happen, and how should a single record be stored in Redis (JSON, or a formatted string)?
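One common answer to the first two questions is one Redis hash per user, keyed cart:&lt;uname&gt;, with product ids as fields and quantities as values. This is an illustrative sketch only, not the approach the post ends up taking; the FakeRedis class below is a toy in-memory stand-in for the real HSET/HGET/HDEL/HGETALL commands, so it runs without a server:

```python
# A toy in-memory stand-in for Redis hash commands, used only to illustrate
# one possible cart layout: one hash per user, key "cart:<uname>",
# fields = product ids, values = quantities.
class FakeRedis:
    def __init__(self):
        self.store = {}

    def hset(self, key, field, value):
        self.store.setdefault(key, {})[field] = value

    def hget(self, key, field):
        return self.store.get(key, {}).get(field)

    def hdel(self, key, field):
        self.store.get(key, {}).pop(field, None)

    def hgetall(self, key):
        return dict(self.store.get(key, {}))

r = FakeRedis()

def add_to_cart(uname, pid, num):
    r.hset("cart:%s" % uname, pid, num)

def cart_contents(uname):
    return r.hgetall("cart:%s" % uname)

add_to_cart("alice", 1001, 2)
add_to_cart("alice", 1002, 1)
add_to_cart("alice", 1001, 3)   # updating a quantity just overwrites the field
print(cart_contents("alice"))
```

With this layout, changing a quantity is a single HSET, removing an item is a single HDEL, and the key name itself answers "whose cart is this".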
Leaving the Redis documentation address here for reference.
Installing and configuring Redis 4.0.9 on CentOS 7.4
Download, extract, install gcc, and build:
wget http://download.redis.io/releases/redis-4.0.9.tar.gz
tar xzf redis-4.0.9.tar.gz
cd redis-4.0.9
yum install gcc
make MALLOC=libc
Configure PATH:
vim /etc/profile.d/custom.sh
## put this in custom.sh
export PATH=$PATH:/root/redis-4.0.9/src
Once the PATH change takes effect (make sure it actually does; look it up if you are unsure), running redis-server prints the following banner:
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 4.0.9 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 13984
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
Running as a service and starting on boot
Edit redis.conf in the Redis directory as follows: change daemonize no to daemonize yes.
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
mkdir /etc/redis
# copy redis.conf to /etc/redis/6379.conf
cp /root/redis-4.0.9/redis.conf /etc/redis/6379.conf
# copy the Redis init script into /etc/init.d
cp /root/redis-4.0.9/utils/redis_init_script /etc/init.d/redisd
# enable the service at boot
chkconfig redisd on
Running chkconfig redisd on failed with service redisd does not support chkconfig. To fix it:
vim /etc/init.d/redisd
Add the following comment lines at the top of the file:
#!/bin/sh
# chkconfig: 2345 90 10
# description: Redis is a persistent key-value database
These lines mean: start Redis automatically at run levels 2, 3, 4, and 5, with start priority 90 and stop priority 10.
Run chkconfig redisd on again and the error is gone.
Next, let's start Redis as a service:
service redisd start
This failed with:
Starting Redis server...
/etc/init.d/redisd: line 21: /usr/local/bin/redis-server: No such file or directory
Based on the error message, create symlinks for the server and the CLI (the redis-cli symlink is needed as well):
ln -s /root/redis-4.0.9/src/redis-server /usr/local/bin/redis-server
ln -s /root/redis-4.0.9/src/redis-cli /usr/local/bin/redis-cli
Run service redisd start again; ps -aux | grep redis now shows the Redis process information, so the setup succeeded.
Noting some pitfalls
If "some day" you see /var/redis/run/redis_6379.pid exists, process is already running or crashed, the machine has suffered a power loss or a crash at some point.
There are two sensible ways to handle it:
Start Redis with the installed config file:
redis-server /etc/redis/6379.conf
Or do a soft reboot with shutdown -r now and let the system recover on its own.
Allowing remote connections
In /etc/redis/6379.conf:
1. Comment out bind 127.0.0.1.
2. Set a password with requirepass ....
3. (Only needed if you don't set a password) change protected-mode yes to no.
PS: I read online about someone whose passwordless machine got hijacked for crypto mining, which scared me into setting a password.
Fixing the problem that appears after setting a password
With a password set, service redisd stop reports this:
Stopping ...
(error) NOAUTH Authentication required.
Waiting for Redis to shutdown ...
Waiting for Redis to shutdown ...
Waiting for Redis to shutdown ...
## this does not actually shut Redis down
## the only way out is ps -aux | grep redis to find the pid and kill it by hand
This happens because service redisd stop actually runs redis-cli -p 6379 shutdown. Once a password is set, redis-cli -a "yourpassword" -p 6379 shutdown is required to shut the service down cleanly. So in /etc/init.d/redisd, under stop, change $CLIEXEC -p $REDISPORT shutdown to $CLIEXEC -a "yourpassword" -p $REDISPORT shutdown.
That basically wraps up installing and configuring Redis.
Configuring Redis in Spring Boot and using RedisTemplate
First of all, Spring Boot offers two ways to use Redis:
Spring's cache abstraction, i.e. annotating methods with @EnableCaching and similar annotations.
RedisTemplate, which works much like the redis-cli terminal: you get and set the key-value pairs yourself.
application.properties and pom.xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
The dependency above seems to pull in Jedis and a connection pool automatically; I'm not entirely sure.
# redis
# Redis database index (defaults to 0)
spring.redis.database=0
# Redis server host
spring.redis.host=hostname/ip
# Redis server port
spring.redis.port=6379
# Redis server password (empty by default)
spring.redis.password=passwdIfAny
# maximum number of pool connections (a negative value means no limit)
spring.redis.jedis.pool.max-active=8
# maximum blocking wait time of the pool (a negative value means no limit)
spring.redis.jedis.pool.max-wait=-1
# maximum number of idle connections in the pool
spring.redis.jedis.pool.max-idle=8
# minimum number of idle connections in the pool
spring.redis.jedis.pool.min-idle=0
# connection timeout in milliseconds
spring.redis.timeout=5000 ## must not be 0 if Redis is not on localhost, otherwise you get a timeout error
Using RedisTemplate
Thanks to Spring Boot's auto-configuration, happily, these settings are all it takes before RedisTemplate can be used directly.
Test code:
@RunWith(SpringRunner.class)
@SpringBootTest
public class EmarketApplicationTests {

    @Autowired
    private RedisTemplate redisTemplate;

    @Test
    public void set() throws InterruptedException {
        ValueOperations value = redisTemplate.opsForValue();
        value.set("名字", "刘港欢");
        for (int i = 0; i < 20; i++) {
            System.out.println(value.get("名字"));
            Thread.sleep(1000);
        }
    }
}
The test shows it runs correctly, and Chinese text works fine too.
First, inject the RedisTemplate dependency with @Autowired.
Then redisTemplate.opsFor...() gives you the supported operations; the rest is easy to follow from the code.
In opsFor..., the part after For is the Redis data type being operated on; that deserves a blog post of its own someday...
A small problem caused by the Redis serializer
value.get works fine in code, but running get <the key you set> in redis-cli returns nil (null). The reason is that JdkSerializationRedisSerializer is used, which bakes the object's type information into the key.
So the actual key is:
\xac\xed\x00\x05t\x00\x06\xe5\x90\x8d\xe5\xad\x97test
If that bothers you, you need to configure the redisTemplate bean yourself, specifically by calling redisTemplate.setKeySerializer(...) and the like.
Of course I only ran into this problem without really solving it; if it ever causes a real bug I'll go looking for a fix.
How I used Redis in the end
I took a rather brute-force approach: ditched the database entirely and stored the cart purely as a Redis String whose value is exactly the (fairly complex) JSON the frontend needs. The problem is that the cart information is not persisted: with a five-minute expiry, the record simply disappears from Redis. Brutal indeed...
I can run the unit tests of my OpenERP v7 add-on as described here.
In PyCharm, I did this by adding a Python configuration in Run/Debug Configurations as follows:
Script:
/home/juliocesar/work/projects/my_project/openerp-server
Script parameters:
--addons-path=openerp/addons,openerp/addons/my_addons --log-level=test --database=my_project_db_test --db_host=localhost --db_user=test --db_password=123 --init=my_addon --test-enable --stop-after-init
It runs correctly, but it prints standard output in plain-text log format like this:
2015-04-24 13:47:55,101 12340 TEST my_project openerp.modules.module: module my_addon: executing 1 `fast_suite` and/or `checks` sub-modules
2015-04-24 13:47:55,101 12340 TEST my_project openerp.modules.module: test_change_old_received_to_contingency (openerp.addons.my_addon.tests.test_my_addon.TestMyItems)
2015-04-24 13:47:55,101 12340 TEST my_project openerp.modules.module: ` Test patch to change old received status to contingency.
2015-04-24 13:47:55,110 12340 TEST my_project openerp.modules.module: Ran 1 tests in 0.006s
2015-04-24 13:47:55,110 12340 TEST my_project openerp.modules.module: OK
It shows the results of running the following test, which I created in the my_addon add-on of the my_project project, in /home/juliocesar/work/projects/my_project/openerp/addons/my_addon/tests/test_my_addon.py:
from openerp.tests.common import TransactionCase
import unittest2

class TestMyItems(TransactionCase):
    def test_change_old_received_to_contingency(self):
        """Test patch to change old received status to contingency."""
        self.assertTrue(True)

if __name__ == '__main__':
    unittest2.main()
What I want is to use a Python tests -> Unittests configuration, so that PyCharm displays the test results with its red/green icons and its test-runner interface.
The unittest configuration requires the script file containing the tests. If I point it at that file, PyCharm finds all the tests in it, but they fail because the database (and other settings, such as the openerp-server script and the remaining parameters specified above for running the OpenERP tests) is not configured:
This is the output of running that configuration:
/usr/bin/python2.7 /home/juliocesar/apps/pycharm/helpers/pycharm/utrunner.py /home/juliocesar/work/projects/my_project/openerp/addons/my_addon/tests/ false
Testing started at 09:38 AM ...
No handlers could be found for logger "openerp.sql_db"
Process finished with exit code 0
Error
Traceback (most recent call last):
  File "/home/juliocesar/work/projects/my_project/openerp/tests/common.py", line 94, in setUp
    TransactionCase.cr = self.cursor()
  File "/home/juliocesar/work/projects/my_project/openerp/tests/common.py", line 55, in cursor
    return openerp.modules.registry.RegistryManager.get(DB).db.cursor()
  File "/home/juliocesar/work/projects/my_project/openerp/modules/registry.py", line 193, in get
    update_module)
  File "/home/juliocesar/work/projects/my_project/openerp/modules/registry.py", line 209, in new
    registry = Registry(db_name)
  File "/home/juliocesar/work/projects/my_project/openerp/modules/registry.py", line 76, in __init__
    cr = self.db.cursor()
  File "/home/juliocesar/work/projects/my_project/openerp/sql_db.py", line 484, in cursor
    return Cursor(self._pool, self.dbname, serialized=serialized)
  File "/home/juliocesar/work/projects/my_project/openerp/sql_db.py", line 182, in __init__
    self._cnx = pool.borrow(dsn(dbname))
  File "/home/juliocesar/work/projects/my_project/openerp/sql_db.py", line 377, in _locked
    return fun(self, *args, **kwargs)
  File "/home/juliocesar/work/projects/my_project/openerp/sql_db.py", line 440, in borrow
    result = psycopg2.connect(dsn=dsn, connection_factory=PsycoConnection)
  File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 179, in connect
    connection_factory=connection_factory, async=async)
OperationalError: FATAL: database "False" does not exist
So, how can I specify the required parameters to run OpenERP v7 unit tests with a PyCharm test configuration?
I used PyCharm 4.0.6 Build #PY-139.1659, but it does not work with PyCharm 5 either.
In the Run/Debug window, have you set the working directory field to /home/juliocesar/work/projects/my_project? That would help PyCharm resolve relative paths as well as imports.
You can also try giving the full path to your addons in the argument list.
You can debug using cmd on Windows: open cmd in the odoo.exe folder, C:\Program Files (x86)\Odoo 8.0-20150719\server, and run this command:
odoo --log-level=debug
Or on Linux: open a terminal in the directory of the odoo.py file (/usr/bin/) and run this command:
python odoo.py --log-level=debug
Press Ctrl+Z or Ctrl+C to stop the log.
You will find a log file (openerp-server.log) in /var/log/odoo/.
Let's run a simple Python program on Hadoop MapReduce. The program will compute the maximum temperature of each year from a historical record. The example uses CentOS, although it works the same on any other Linux distribution.
If you don't have Hadoop installed yet, this post may interest you: step-by-step installation of Hadoop on Linux with a usage example.
First we create a tempMax folder on the desktop, which we will use as the working directory:
From the terminal, move into that folder:
cd Escritorio
cd tempMax
Create the Python file that will hold our mapper code:
touch mapperMaxTemp.py
Before writing the mapper code, keep in mind that our data is laid out as follows:
That is, each row holds the year, the month, and the temperature separated by tabs (fictitious data, generated with a random function). Each year will be one subproblem, so we will emit key-value pairs where the key is the year and the value is the temperature.
Once mapperMaxTemp.py is created, open it from the desktop with a double click and write the mapper code:
#!/usr/bin/python
import sys
"""
MaxTemp mapper
From http://exponentis.es/
"""
# For each reading, emit the <year, temp> pair
for linea in sys.stdin:
    linea = linea.strip()
    anyo, mes, temp = linea.split("\t", 2)
    print("%s\t%s" % (anyo, temp))
For each input line, the code first removes whitespace (leading and trailing) with the strip() method, and then extracts the year, month, and temperature from the row by splitting the input at every tab (\t). Finally, the print emits the key and value separated by a tab.
Save the file and, back in the Linux terminal, give ourselves permission to execute the mapper:
chmod u+x mapperMaxTemp.py
Now let's create the reducer:
touch reducerMaxTemp.py
Open the newly created reducerMaxTemp.py with a double click from its desktop folder. Now we have to write code that computes the maximum of the temperatures received:
#!/usr/bin/python
import sys
"""
MaxTemp reducer
From http://exponentis.es/
"""
subproblema = None
tempMaxima = None
for claveValor in sys.stdin:
    anyo, temp = claveValor.split("\t", 1)
    # convert the temp to float
    temp = float(temp)
    # the first subproblem is the reducer's first year (and for now its temp is the maximum too)
    if subproblema == None:
        subproblema = anyo
        tempMaxima = temp
    # if the year belongs to the current subproblem, check whether this is the maximum temperature
    if subproblema == anyo:
        if temp > tempMaxima:
            tempMaxima = temp
    else:  # this subproblem is finished, so emit it
        print("%s\t%s" % (subproblema, tempMaxima))
        # move on to the next subproblem (for now its temp is the maximum)
        subproblema = anyo
        tempMaxima = temp
# the loop above never emits the last subproblem
print("%s\t%s" % (subproblema, tempMaxima))
The program creates many key-value pairs, and we have to detect where each group ends. We split the current line into year and temperature. In the first subproblem we take the first temperature as the maximum, then compare each incoming temperature against the stored maximum; if the current temperature is higher, the stored one is updated.
This is done for every temperature until the year changes, at which point we switch subproblems, emitting the solution of the previous one (year and maximum temperature).
The last print emits the solution of the final subproblem.
As with the mapper, give ourselves execute permission on the reducer from the Linux console:
chmod u+x reducerMaxTemp.py
Now download the medidas.txt file into our working folder. This file contains 730 daily temperature records across 2017 and 2018, in the format we saw earlier: year, month, and temperature separated by tabs. Note: the data is made up, with the daily maximum temperature drawn by a random function between -5 ºC and 48 ºC.
Download the medidas.txt file here: https://mega.nz/#!Pnpw3aYK
Let's run the mapper from the Linux console to check that it is coded correctly. Note: on a Spanish keyboard the vertical bar is typed with "Alt Gr + 1".
cat medidas.txt | ./mapperMaxTemp.py
The result should print every key-value pair the mapper emits, that is, all the temperatures of each year without the month:
Now we run the Map and the Reduce together, without Hadoop, as a test:
cat medidas.txt | ./mapperMaxTemp.py | sort -k1,1 | ./reducerMaxTemp.py
The result is the maximum of each year; they match, logically, because the same random function was used:
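The same local check can also be reproduced entirely in Python, with no Hadoop or shell involved, which is handy for unit-testing the logic. The generator functions below mirror the two scripts above, and the sample lines are made up:

```python
# Simulate `cat medidas.txt | mapper | sort | reducer` in pure Python.
def mapper(lines):
    for linea in lines:
        anyo, mes, temp = linea.strip().split("\t", 2)
        yield "%s\t%s" % (anyo, temp)

def reducer(pairs):
    subproblema, tempMaxima = None, None
    for claveValor in pairs:
        anyo, temp = claveValor.split("\t", 1)
        temp = float(temp)
        if subproblema is None:
            subproblema, tempMaxima = anyo, temp
        if subproblema == anyo:
            tempMaxima = max(tempMaxima, temp)
        else:
            yield "%s\t%s" % (subproblema, tempMaxima)
            subproblema, tempMaxima = anyo, temp
    yield "%s\t%s" % (subproblema, tempMaxima)

# made-up sample input; sorted() plays the role of the shell's `sort -k1,1`
datos = ["2017\t01\t5.0", "2018\t07\t41.0", "2017\t08\t39.5", "2018\t01\t-2.0"]
resultado = list(reducer(sorted(mapper(datos))))
print(resultado)  # ['2017\t39.5', '2018\t41.0']
```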
Seeing that it already works, we can run it on Hadoop as follows:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.8.5.jar -files ./mapperMaxTemp.py -mapper ./mapperMaxTemp.py -file ./reducerMaxTemp.py -reducer ./reducerMaxTemp.py -input medidas.txt -output ./miSalidaMaxTemp1
With this command we tell Hadoop which files are our mapper and reducer, and that it will also have to distribute them across the different servers (which is why each appears twice). We also tell it what the input data is (medidas.txt) and what output we want.
A new folder called miSalidaMaxTemp1 will have been created in our directory, containing a file called part-00000 with the result of the analysis:
With that we would be done. But suppose we now also want the maximum temperature for each month.
We modify the mapper, adding the month:
We separate year and month with a hyphen, then the temperature after a tab.
The Hadoop command is very similar to the previous one, but we must additionally tell it that our mapper emits a composite key (two fields) separated by a hyphen. We must also specify a different output, because Hadoop does not overwrite the previous one and raises an error instead:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.8.5.jar -Dstream.num.map.key.fields=2 -Dmap.output.key.field.separator="-" -files ./mapperMaxTemp.py -mapper ./mapperMaxTemp.py -file ./reducerMaxTemp.py -reducer ./reducerMaxTemp.py -input medidas.txt -output ./miSalidaMaxTemp2
The result in the new part-00000 file is the maximum temperature per month:
Note: if we wanted to add a combiner, it could be done as follows:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.8.5.jar -Dstream.num.map.key.fields=2 -Dmap.output.key.field.separator="-" -files ./mapperMaxTemp.py -mapper ./mapperMaxTemp.py -file ./reducerMaxTemp.py -reducer ./reducerMaxTemp.py -combiner ./reducerMaxTemp.py -input medidas.txt -output ./miSalidaMaxTemp2
Authenticating users with Repl.it Auth
This tutorial will teach you how to use the Repl.it Auth API.
Prerequisites
You are required to know the following before you start:
Basic knowledge of Python/Flask
Basic knowledge of Jinja2 (Flask templating)
Basic knowledge of HTML
Starting off
We'll start off with a basic Flask template (main.py)
from flask import Flask, render_template, request
app = Flask('app')
@app.route('/')
def hello_world():
return render_template('index.html')
app.run(host='0.0.0.0', port=8080)
(/templates/index.html)
<!doctype html>
<html>
<head>
  <title>Repl Auth</title>
</head>
<body>
  Hello!
</body>
</html>
Nothing interesting yet.
The authentication script
Now, we'll add the authentication script.
<div>
<script authed="location.reload()" src="https://auth.turbio.repl.co/script.js"></script>
</div>
This can be placed anywhere in the document body and will create an iframe in its parent element. Additionally, any JavaScript placed in the authed attribute will be executed when the person finishes authenticating, so the current one will just reload when the user authenticates.
If you run it now, you will notice a big Let (your site url) know who you are? with a small version of your profile and an Authorize button.
You can click the button but nothing will happen.
The headers
Now, let's make something happen.
Go back to your main.py file; we will be grabbing the Repl.it specific headers for the request and extracting data from them.
The main ones we care about are: X-Replit-User-Id, X-Replit-User-Name, and X-Replit-User-Roles. The username one will probably be the most useful for now.
With this information, we can let our HTML template be aware of them.
(main.py)
@app.route('/')
def hello_world():
return render_template(
'index.html',
user_id=request.headers['X-Replit-User-Id'],
user_name=request.headers['X-Replit-User-Name'],
user_roles=request.headers['X-Replit-User-Roles']
)
(templates/index.html)
<body>
{% if user_id %}
<h1>Hello, {{ user_name }}!</h1>
<p>Your user id is {{ user_id }}.</p>
{% else %}
Hello! Please log in.
<div>
<script authed="location.reload()" src="https://auth.turbio.repl.co/script.js"></script>
</div>
{% endif %}
</body>
Success!
Now, run your code. It should display a big Hello, (your username)! along with your user ID.
If you want to port this to other languages or frameworks like NodeJS + Express, just be aware of how you can get specific request headers.
Warning
Also, be aware that if you're going to build an accounts system, PLEASE do all the user-checking logic on the BACKEND; that means NOT doing it with JavaScript in your HTML. That is all.
Please upvote my post if you found it helpful :)
If you want it, here is the source code for the basic Repl Auth script demonstrated in this tutorial https://repl.it/@mat1/repl-auth-example.
In this assignment you'll have to implement a simple stack machine (non-real-world) that can perform basic arithmetic operations. The implementation has to be in Python, using the template provided in ./robolab-template/src/stack_machine.py. Again, you will implement and run the program on your computer.
Read about the specification of our stack machine here: Specification.
Deadline for submission: Sunday, December 13th 2020, 23:59 // 11:59 pm
Please upload your solution into your own Gitlab repository using the prepared files.
Implement the function top() in
./src/stack_machine.py:
Implement or define a LIFO stack that holds unsigned 8-bit integers and characters.
Add the missing logic to the function top() returning the top element if there is any, or None.
Take into account that the items on the stack probably need to be converted into a tuple beforehand.
Implement the function do() in
./src/stack_machine.py:
Define a representation for stack machine instructions and characters, e.g. using Enums classes.
Add the missing logic for processing a 6-bit word (a Tuple).
Handle the input parameter according to the specification.
If the word is an operand or a character, push it to the stack.
If the word is an instruction or a string operation (e.g. SPEAK), pop the operands needed from the stack and execute the instruction.
Don’t forget to push the result back to the stack.
Check for an overflow if required and set the overflow flag according to our specification.
Stop the execution if there are not enough operands or there was an illegal instruction, e.g. division by 0.
Implement all instructions from our specification.
You can move the logic for the instructions into another method, e.g. execute, in order to keep your code clean and readable.
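A minimal sketch of the shape top() and do() might take. The decoded word format, the ADD instruction, and the wrap-around overflow handling here are stand-ins; the real instruction encoding and overflow rules come from the course specification:

```python
# A minimal, illustrative StackMachine: real instruction words and their
# 6-bit encodings come from the course specification; ("OPERAND", n) / ("ADD",)
# are made-up decoded forms used only to show the overall structure.
class StackMachine:
    def __init__(self):
        self.stack = []
        self.overflow = False

    def top(self):
        # return the top element if there is one, otherwise None
        return self.stack[-1] if self.stack else None

    def do(self, word):
        if word[0] == "OPERAND":
            # operands and characters are simply pushed
            self.stack.append(word[1])
        elif word[0] == "ADD":
            # instructions pop their operands, execute, and push the result back
            if len(self.stack) < 2:
                raise RuntimeError("not enough operands")
            b, a = self.stack.pop(), self.stack.pop()
            result = a + b
            # unsigned 8-bit arithmetic: set the overflow flag and wrap around
            self.overflow = result > 255
            self.stack.append(result % 256)

sm = StackMachine()
sm.do(("OPERAND", 200))
sm.do(("OPERAND", 100))
sm.do(("ADD",))
print(sm.top(), sm.overflow)  # 44 True
```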
Now we are looking into unit-tests again and update the file ./src/test.py.
First, import everything from stack_machine.py.
Implement the instance test case for StackMachine as you did for HammingCode.
Create a test case prototype for the function do() and add a check for the final result on the stack.
Use asserts and pre-defined expectations (e.g. simple variable holding the value) for the check.
Write a test case for the function top(). Make sure that the value returned matches our definition (8-bit tuple).
In your file ./src/test.py, we will implement the prototyped test case for do().
Implement and execute the following sequence (taken from assignment 3 and extended):
\[\begin{matrix}001010 \\010001 \\010001 \\010110 \\011111 \\000100 \\011011 \\000100 \\011001 \\000110 \\011000 \\100010 \\110110 \\101000 \\110101 \\000101 \\100001 \\010000\end{matrix}\]
For the SPEAK instruction, we only print the output instead of using TTS in this assignment.
What happens if there was a division by 0 or if there were not enough operands on the stack?
Cover the correct error handling also with unit-tests.
Provide a test case for every instruction listed in our specification.
To simplify your test cases you can outsource the object creation into a setUp function:
def setUp(self):
self.sm = StackMachine()
Do not exchange source code with members of other groups! Keep it private!
We do not tolerate plagiarism.
Plagiarism of any form will get you disqualified from the lab.
set cursor in an rdocx object
A set of functions is available to manipulate the position of a virtual cursor. This cursor will be used when inserting, deleting or updating elements in the document.
Usage
cursor_begin(x)
cursor_bookmark(x, id)
cursor_end(x)
cursor_reach(x, keyword)
cursor_forward(x)
cursor_backward(x)
Arguments
x
a docx device
id
bookmark id
keyword
keyword to look for as a regular expression
cursor_begin
Set the cursor at the beginning of the document, on the first element of the document (usually a paragraph or a table).
cursor_bookmark
Set the cursor at a bookmark that has previously been set.
cursor_end
Set the cursor at the end of the document, on the last element of the document.
cursor_reach
Set the cursor on the first element of the document that contains the text specified in argument keyword. The argument keyword is a regexpr pattern.
cursor_forward
Move the cursor forward, it increments the cursor in the document.
cursor_backward
Move the cursor backward, it decrements the cursor in the document.
Examples
# NOT RUN {
library(officer)
library(magrittr)
doc <- read_docx() %>%
body_add_par("paragraph 1", style = "Normal") %>%
body_add_par("paragraph 2", style = "Normal") %>%
body_add_par("paragraph 3", style = "Normal") %>%
body_add_par("paragraph 4", style = "Normal") %>%
body_add_par("paragraph 5", style = "Normal") %>%
body_add_par("paragraph 6", style = "Normal") %>%
body_add_par("paragraph 7", style = "Normal") %>%
# default template contains only an empty paragraph
# Using cursor_begin and body_remove, we can delete it
cursor_begin() %>% body_remove() %>%
# Let add text at the beginning of the
# paragraph containing text "paragraph 4"
cursor_reach(keyword = "paragraph 4") %>%
slip_in_text("This is ", pos = "before", style = "Default Paragraph Font") %>%
# move the cursor forward and end a section
cursor_forward() %>%
body_add_par("The section stop here", style = "Normal") %>%
body_end_section(landscape = TRUE) %>%
# move the cursor at the end of the document
cursor_end() %>%
body_add_par("The document ends now", style = "Normal")
print(doc, target = tempfile(fileext = ".docx"))
# cursor_bookmark ----
library(magrittr)
doc <- read_docx() %>%
body_add_par("centered text", style = "centered") %>%
body_bookmark("text_to_replace") %>%
body_add_par("A title", style = "heading 1") %>%
body_add_par("Hello world!", style = "Normal") %>%
cursor_bookmark("text_to_replace") %>%
body_add_table(value = iris, style = "table_template")
print(doc, target = tempfile(fileext = ".docx"))
# }
Documentation reproduced from package officer, version 0.3.4, License: GPL-3
Designed for the data science workflow of the
tidyverse
The greatest benefit to tidyquant is the ability to apply the data science workflow to easily model and scale your financial analysis as described in R for Data Science. Scaling is the process of creating an analysis for one asset and then extending it to multiple groups. This idea of scaling is incredibly useful to financial analysts because typically one wants to compare many assets to make informed decisions. Fortunately, the tidyquant package integrates with the tidyverse making scaling super simple!
All tidyquant functions return data in the tibble (tidy data frame) format, which allows for interaction within the tidyverse. This means we can:
Use the pipe (%>%) for chaining operations
Use dplyr and tidyr: select, filter, group_by, nest/unnest, spread/gather, etc.
Use purrr: mapping functions with map
We’ll go through some useful techniques for getting and manipulating groups of data.
Load the tidyquant package to get started.
# Loads tidyquant, lubridate, xts, quantmod, TTR, and PerformanceAnalytics
library(tidyverse)
library(tidyquant)
A very basic example is retrieving the stock prices for multiple stocks. There are three primary ways to do this:
c("AAPL", "GOOG", "FB") %>%
tq_get(get = "stock.prices", from = "2016-01-01", to = "2017-01-01")
## # A tibble: 756 x 8
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AAPL 2016-01-04 25.7 26.3 25.5 26.3 270597600 24.4
## 2 AAPL 2016-01-05 26.4 26.5 25.6 25.7 223164000 23.8
## 3 AAPL 2016-01-06 25.1 25.6 25.0 25.2 273829600 23.4
## 4 AAPL 2016-01-07 24.7 25.0 24.1 24.1 324377600 22.4
## 5 AAPL 2016-01-08 24.6 24.8 24.2 24.2 283192000 22.5
## 6 AAPL 2016-01-11 24.7 24.8 24.3 24.6 198957600 22.9
## 7 AAPL 2016-01-12 25.1 25.2 24.7 25.0 196616800 23.2
## 8 AAPL 2016-01-13 25.1 25.3 24.3 24.3 249758400 22.6
## 9 AAPL 2016-01-14 24.5 25.1 23.9 24.9 252680400 23.1
## 10 AAPL 2016-01-15 24.0 24.4 23.8 24.3 319335600 22.5
## # … with 746 more rows
The output is a single-level tibble with all of the stock prices in one tibble. The auto-generated column name is "symbol", which can be pre-emptively renamed by giving the vector a name (e.g. stocks <- c("AAPL", "GOOG", "FB")) and then piping to tq_get.
First, get a stock list in data frame format, either by creating the tibble yourself or by retrieving it from tq_index / tq_exchange. The stock symbols must be in the first column.
stock_list <- tibble(stocks = c("AAPL", "JPM", "CVX"),
industry = c("Technology", "Financial", "Energy"))
stock_list
## # A tibble: 3 x 2
##   stocks industry  
##   <chr>  <chr>     
## 1 AAPL   Technology
## 2 JPM    Financial 
## 3 CVX    Energy
Second, send the stock list to tq_get. Notice how the symbol and industry columns are automatically expanded to the length of the stock prices.
stock_list %>%
tq_get(get = "stock.prices", from = "2016-01-01", to = "2017-01-01")
## # A tibble: 756 x 9
## stocks industry date open high low close volume adjusted
## <chr> <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AAPL Technology 2016-01-04 25.7 26.3 25.5 26.3 270597600 24.4
## 2 AAPL Technology 2016-01-05 26.4 26.5 25.6 25.7 223164000 23.8
## 3 AAPL Technology 2016-01-06 25.1 25.6 25.0 25.2 273829600 23.4
## 4 AAPL Technology 2016-01-07 24.7 25.0 24.1 24.1 324377600 22.4
## 5 AAPL Technology 2016-01-08 24.6 24.8 24.2 24.2 283192000 22.5
## 6 AAPL Technology 2016-01-11 24.7 24.8 24.3 24.6 198957600 22.9
## 7 AAPL Technology 2016-01-12 25.1 25.2 24.7 25.0 196616800 23.2
## 8 AAPL Technology 2016-01-13 25.1 25.3 24.3 24.3 249758400 22.6
## 9 AAPL Technology 2016-01-14 24.5 25.1 23.9 24.9 252680400 23.1
## 10 AAPL Technology 2016-01-15 24.0 24.4 23.8 24.3 319335600 22.5
## # … with 746 more rows
Get an index…
tq_index("DOW")
## # A tibble: 30 x 8
##    symbol company    identifier sedol  weight sector  shares_held local_currency
##    <chr>  <chr>      <chr>      <chr>   <dbl> <chr>         <dbl> <chr>         
##  1 UNH    UnitedHea… 91324P10   29177… 0.0752 Health…     5464167 USD           
##  2 HD     Home Depo… 43707610   24342… 0.0665 Consum…     5464167 USD           
##  3 CRM    salesforc… 79466L30   23105… 0.0595 Inform…     5464167 USD           
##  4 AMGN   Amgen Inc. 03116210   20236… 0.0537 Health…     5464167 USD           
##  5 MCD    McDonald'… 58013510   25507… 0.0529 Consum…     5464167 USD           
##  6 MSFT   Microsoft… 59491810   25881… 0.0499 Inform…     5464167 USD           
##  7 GS     Goldman S… 38141G10   24079… 0.0484 Financ…     5464167 USD           
##  8 V      Visa Inc.… 92826C83   B2PZN… 0.0459 Inform…     5464167 USD           
##  9 HON    Honeywell… 43851610   20204… 0.0403 Indust…     5464167 USD           
## 10 MMM    3M Company 88579Y10   25957… 0.0397 Indust…     5464167 USD           
## # … with 20 more rows
…or, get an exchange.
tq_exchange("NYSE")
Send the index or exchange to tq_get. Important Note: This can take several minutes depending on the size of the index or exchange, which is why only the first three stocks are evaluated in the vignette.
tq_index("DOW") %>%
slice(1:3) %>%
tq_get(get = "stock.prices")
## # A tibble: 8,157 x 15
## symbol company identifier sedol weight sector shares_held local_currency
## <chr> <chr> <chr> <chr> <dbl> <chr> <dbl> <chr>
## 1 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 2 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 3 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 4 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 5 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 6 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 7 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 8 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 9 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## 10 UNH United… 91324P10 2917… 0.0752 Healt… 5464167 USD
## # … with 8,147 more rows, and 7 more variables: date <date>, open <dbl>,
## # high <dbl>, low <dbl>, close <dbl>, volume <dbl>, adjusted <dbl>
You can use any applicable “getter” to get data for every stock in an index or an exchange! This includes: “stock.prices”, “key.ratios”, “key.stats”, and more.
Once you get the data, you typically want to do something with it. You can easily do this at scale. Let’s get the yearly returns for multiple stocks using tq_transmute. First, get the prices. We’ll use the FANG data set, but you typically will use tq_get to retrieve data in “tibble” format.
data("FANG")
FANG
## # A tibble: 4,032 x 8
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 FB 2013-01-02 27.4 28.2 27.4 28 69846400 28
## 2 FB 2013-01-03 27.9 28.5 27.6 27.8 63140600 27.8
## 3 FB 2013-01-04 28.0 28.9 27.8 28.8 72715400 28.8
## 4 FB 2013-01-07 28.7 29.8 28.6 29.4 83781800 29.4
## 5 FB 2013-01-08 29.5 29.6 28.9 29.1 45871300 29.1
## 6 FB 2013-01-09 29.7 30.6 29.5 30.6 104787700 30.6
## 7 FB 2013-01-10 30.6 31.5 30.3 31.3 95316400 31.3
## 8 FB 2013-01-11 31.3 32.0 31.1 31.7 89598000 31.7
## 9 FB 2013-01-14 32.1 32.2 30.6 31.0 98892800 31.0
## 10 FB 2013-01-15 30.6 31.7 29.9 30.1 173242600 30.1
## # … with 4,022 more rows
Second, use group_by to group by stock symbol. Third, apply the mutation. We can do this in one easy workflow. The periodReturn function is applied to each group of stock prices, and a new data frame is returned with the annual returns in the correct periodicity.
FANG_returns_yearly <- FANG %>%
group_by(symbol) %>%
tq_transmute(select = adjusted,
mutate_fun = periodReturn,
period = "yearly",
col_rename = "yearly.returns")
Last, we can visualize the returns.
FANG_returns_yearly %>%
ggplot(aes(x = year(date), y = yearly.returns, fill = symbol)) +
geom_bar(position = "dodge", stat = "identity") +
labs(title = "FANG: Annual Returns",
subtitle = "Mutating at scale is quick and easy!",
y = "Returns", x = "", color = "") +
scale_y_continuous(labels = scales::percent) +
coord_flip() +
theme_tq() +
scale_fill_tq()
Eventually you will want to begin modeling (or more generally applying functions) at scale! One of the best features of the tidyverse is the ability to map functions to nested tibbles using purrr. From the Many Models chapter of "R for Data Science", we can apply the same modeling workflow to financial analysis using a two-step workflow: first, create the analysis for a single asset; second, scale it to many assets.
Let’s go through an example to illustrate.
In this example, we’ll use a simple linear model to identify the trend in annual returns to determine if the stock returns are decreasing or increasing over time.
First, let’s collect stock data with tq_get()
AAPL <- tq_get("AAPL", from = "2007-01-01", to = "2016-12-31")
AAPL
## # A tibble: 2,518 x 8
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AAPL 2007-01-03 3.08 3.09 2.92 2.99 1238319600 2.59
## 2 AAPL 2007-01-04 3.00 3.07 2.99 3.06 847260400 2.64
## 3 AAPL 2007-01-05 3.06 3.08 3.01 3.04 834741600 2.62
## 4 AAPL 2007-01-08 3.07 3.09 3.05 3.05 797106800 2.64
## 5 AAPL 2007-01-09 3.09 3.32 3.04 3.31 3349298400 2.86
## 6 AAPL 2007-01-10 3.38 3.49 3.34 3.46 2952880000 2.99
## 7 AAPL 2007-01-11 3.43 3.46 3.40 3.42 1440252800 2.96
## 8 AAPL 2007-01-12 3.38 3.40 3.33 3.38 1312690400 2.92
## 9 AAPL 2007-01-16 3.42 3.47 3.41 3.47 1244076400 3.00
## 10 AAPL 2007-01-17 3.48 3.49 3.39 3.39 1646260000 2.93
## # … with 2,508 more rows
Next, come up with a function to help us collect annual log returns. The function below mutates the stock prices to period returns using tq_transmute(). We add the type = "log" and period = "yearly" arguments to ensure we retrieve a tibble of annual log returns.
get_annual_returns <- function(stock.returns) {
stock.returns %>%
tq_transmute(select = adjusted,
mutate_fun = periodReturn,
type = "log",
period = "yearly")
}
Let’s test get_annual_returns out. We now have the annual log returns over the past ten years.
AAPL_annual_log_returns <- get_annual_returns(AAPL)
AAPL_annual_log_returns
## # A tibble: 10 x 2
## date yearly.returns
## <date> <dbl>
## 1 2007-12-31 0.860
## 2 2008-12-31 -0.842
## 3 2009-12-31 0.904
## 4 2010-12-31 0.426
## 5 2011-12-30 0.228
## 6 2012-12-31 0.282
## 7 2013-12-31 0.0776
## 8 2014-12-31 0.341
## 9 2015-12-31 -0.0306
## 10 2016-12-30 0.118
Let’s visualize to identify trends. We can see from the linear trend line that AAPL’s stock returns are declining.
AAPL_annual_log_returns %>%
ggplot(aes(x = year(date), y = yearly.returns)) +
geom_hline(yintercept = 0, color = palette_light()[[1]]) +
geom_point(size = 2, color = palette_light()[[3]]) +
geom_line(size = 1, color = palette_light()[[3]]) +
geom_smooth(method = "lm", se = FALSE) +
labs(title = "AAPL: Visualizing Trends in Annual Returns",
x = "", y = "Annual Returns", color = "") +
theme_tq()
Now, we can get the linear model using the lm() function. However, there is one problem: the output is not “tidy”.
mod <- lm(yearly.returns ~ year(date), data = AAPL_annual_log_returns)
mod
##
## Call:
## lm(formula = yearly.returns ~ year(date), data = AAPL_annual_log_returns)
##
## Coefficients:
## (Intercept) year(date)
## 58.86281 -0.02915
We can utilize the broom package to get "tidy" data from the model. There are three primary functions:
- augment: adds columns to the original data such as predictions, residuals and cluster assignments
- glance: provides a one-row summary of model-level statistics
- tidy: summarizes a model's statistical findings such as coefficients of a regression
We’ll use tidy to retrieve the model coefficients.
library(broom)
tidy(mod)
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 58.9 113. 0.520 0.617
## 2 year(date) -0.0291 0.0562 -0.518 0.618
Adding to our workflow, we have the following:
get_model <- function(stock_data) {
annual_returns <- get_annual_returns(stock_data)
mod <- lm(yearly.returns ~ year(date), data = annual_returns)
tidy(mod)
}
Testing it out on a single stock, we can see that the "term" that contains the direction of the trend (the slope) is "year(date)". The interpretation is that as the year increases by one unit, the annual returns decrease by about 2.9%.
get_model(AAPL)
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 58.9 113. 0.520 0.617
## 2 year(date) -0.0291 0.0562 -0.518 0.618
Now that we have identified the trend direction, it looks like we are ready to scale.
Once the analysis for one stock is done, scaling to many stocks is simple. For brevity, we'll randomly sample five stocks from the S&P 500 with a call to dplyr::sample_n().
set.seed(10)
stocks_tbl <- tq_index("SP500") %>%
sample_n(5)
stocks_tbl
## # A tibble: 5 x 8
## symbol company identifier sedol weight sector shares_held local_currency
## <chr> <chr> <chr> <chr> <dbl> <chr> <dbl> <chr>
## 1 VNT Vontier C… 92888110 BH4G… 1.34e-4 Informa… 1420288 USD
## 2 ADI Analog De… 03265410 2032… 1.59e-3 Informa… 3870636 USD
## 3 SYF Synchrony… 87165B10 BP96… 5.21e-4 Financi… 5673466 USD
## 4 EVRG Evergy In… 30034W10 BFMX… 4.44e-4 Utiliti… 2403514 USD
## 5 AAL American … 02376R10 BCV7… 2.23e-4 Industr… 5244777 USD
We can now apply our analysis function to the stocks using dplyr::mutate and purrr::map. The mutate() function adds a column to our tibble, and the map() function maps our custom get_model() function to each nested tibble in the data column. The tidyr::unnest function unrolls the nested data frame so all of the model statistics are accessible in the top data frame level. The filter, arrange and select steps just manipulate the data frame to isolate and arrange the data for our viewing.
stocks_model_stats <- stocks_tbl %>%
select(symbol, company) %>%
tq_get(from = "2007-01-01", to = "2016-12-31") %>%
# Nest
group_by(symbol, company) %>%
nest() %>%
# Apply the get_model() function to the new "nested" data column
mutate(model = map(data, get_model)) %>%
# Unnest and collect slope
unnest(model) %>%
filter(term == "year(date)") %>%
arrange(desc(estimate)) %>%
select(-term)
stocks_model_stats
## # A tibble: 4 x 7
## # Groups:   symbol, company [4]
##   symbol company              data          estimate std.error statistic p.value
##   <chr>  <chr>                <list>           <dbl>     <dbl>     <dbl>   <dbl>
## 1 AAL    American Airlines G… <tibble [2,5…   0.142     0.0753     1.89   0.0958
## 2 EVRG   Evergy Inc.          <tibble [2,5…   0.0299    0.0131     2.28   0.0522
## 3 ADI    Analog Devices Inc.  <tibble [2,5…   0.0272    0.0295     0.920  0.385 
## 4 SYF    Synchrony Financial  <tibble [611…  -0.0359    0.115     -0.312  0.807
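The nest/map/unnest pattern above is classic split-apply-combine, and the idea is not R-specific. Purely as an illustration (made-up returns, plain Python standing in for the tidyverse verbs, and a hand-rolled least-squares slope standing in for tidy(lm(...))), the same shape looks like this:

```python
from collections import defaultdict

def fit_trend(rows):
    """Least-squares slope of yearly returns vs. year (our lm() stand-in)."""
    xs = [year for year, _ in rows]
    ys = [ret for _, ret in rows]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Illustrative (fabricated) annual returns: (symbol, year, return)
returns = [("AAPL", 2014, 0.34), ("AAPL", 2015, -0.03), ("AAPL", 2016, 0.12),
           ("MSFT", 2014, 0.24), ("MSFT", 2015, 0.19), ("MSFT", 2016, 0.12)]

# "group_by(symbol) %>% nest()": split rows into per-symbol groups
nested = defaultdict(list)
for symbol, year, ret in returns:
    nested[symbol].append((year, ret))

# "mutate(model = map(data, get_model)) %>% unnest()": apply and combine
slopes = {symbol: fit_trend(rows) for symbol, rows in nested.items()}
```

The resulting dict maps each symbol to its trend slope, just as the unnested tibble exposes one estimate per symbol.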
We’re done! We now have the coefficient of the linear regression that tracks the direction of the trend line. We can easily extend this type of analysis to larger lists or stock indexes. For example, the entire S&P 500 could be analyzed by removing the sample_n() call following tq_index("SP500").
Eventually you will run into a stock index, stock symbol, or FRED data code that cannot be retrieved.
This becomes painful when scaling if the functions return errors. So, the tq_get() function is designed to handle errors gracefully: an NA value is returned when an error is generated, along with a gentle warning.
tq_get("XYZ", "stock.prices")
## [1] NA
There are pros and cons to this approach that you may not agree with, but I believe it helps in the long run. Just be aware of what happens:
Pros: Long-running scripts are not interrupted because of one error.
Cons: Errors can be inadvertently handled or flow downstream if the user does not read the warnings.
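The trade-off above is a general design choice, not something specific to R. As a rough Python sketch (get_safely and fake_fetch are hypothetical names, not part of any library), "warn and return NA" instead of raising looks like this:

```python
import warnings

def get_safely(fetch, symbol):
    """Mimic tq_get()'s graceful failure: warn and return None (R's NA)
    instead of raising, so a long-running loop is not interrupted."""
    try:
        return fetch(symbol)
    except Exception as exc:
        warnings.warn(f"x = {symbol!r}: {exc} Returning NA.")
        return None

def fake_fetch(symbol):
    """Toy stand-in for a price downloader, with fabricated prices."""
    prices = {"AAPL": [26.3, 25.7], "GOOG": [741.8, 742.6]}
    if symbol not in prices:
        raise ValueError(f'Unable to import "{symbol}".')
    return prices[symbol]

results = {s: get_safely(fake_fetch, s) for s in ["AAPL", "GOOG", "BAD APPLE"]}
# The complete_cases = TRUE behavior: drop the failures before analysis.
complete = {s: p for s, p in results.items() if p is not None}
```

The downstream code then only ever sees complete cases, at the cost of silently losing a symbol if the warning goes unread.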
Let’s see an example of using tq_get() to get the stock prices for a long list of stocks with one BAD APPLE. The complete_cases argument comes in handy. The default is TRUE, which removes "bad apples" so downstream analyses have complete cases to compute on. Note the gentle warning stating that an error occurred and was dealt with by removing the rows from the results.
c("AAPL", "GOOG", "BAD APPLE") %>%
tq_get(get = "stock.prices", complete_cases = TRUE)
## Warning: Problem with `mutate()` input `data..`.
## ℹ x = 'BAD APPLE', get = 'stock.prices': Error in getSymbols.yahoo(Symbols = "BAD APPLE", env = <environment>, : Unable to import "BAD APPLE".
## BAD APPLE download failed after two attempts. Error message:
## HTTP error 400.
## Removing BAD APPLE.
## ℹ Input `data..` is `purrr::map(...)`.
## Warning: x = 'BAD APPLE', get = 'stock.prices': Error in getSymbols.yahoo(Symbols = "BAD APPLE", env = <environment>, : Unable to import "BAD APPLE".
## BAD APPLE download failed after two attempts. Error message:
## HTTP error 400.
## Removing BAD APPLE.
## # A tibble: 5,438 x 8
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AAPL 2010-01-04 7.62 7.66 7.58 7.64 493729600 6.60
## 2 AAPL 2010-01-05 7.66 7.70 7.62 7.66 601904800 6.62
## 3 AAPL 2010-01-06 7.66 7.69 7.53 7.53 552160000 6.51
## 4 AAPL 2010-01-07 7.56 7.57 7.47 7.52 477131200 6.50
## 5 AAPL 2010-01-08 7.51 7.57 7.47 7.57 447610800 6.54
## 6 AAPL 2010-01-11 7.6 7.61 7.44 7.50 462229600 6.48
## 7 AAPL 2010-01-12 7.47 7.49 7.37 7.42 594459600 6.41
## 8 AAPL 2010-01-13 7.42 7.53 7.29 7.52 605892000 6.50
## 9 AAPL 2010-01-14 7.50 7.52 7.46 7.48 432894000 6.46
## 10 AAPL 2010-01-15 7.53 7.56 7.35 7.35 594067600 6.36
## # … with 5,428 more rows
Now, switching to complete_cases = FALSE will retain any errors as NA values in a nested data frame. Notice that the error message and output change: the error message now states that NA values exist in the output, and the return is a "nested" data structure.
c("AAPL", "GOOG", "BAD APPLE") %>%
tq_get(get = "stock.prices", complete_cases = FALSE)
## Warning: Problem with `mutate()` input `data..`.
## ℹ x = 'BAD APPLE', get = 'stock.prices': Error in getSymbols.yahoo(Symbols = "BAD APPLE", env = <environment>, : Unable to import "BAD APPLE".
## BAD APPLE download failed after two attempts. Error message:
## HTTP error 400.
##
## ℹ Input `data..` is `purrr::map(...)`.
## Warning: x = 'BAD APPLE', get = 'stock.prices': Error in getSymbols.yahoo(Symbols = "BAD APPLE", env = <environment>, : Unable to import "BAD APPLE".
## BAD APPLE download failed after two attempts. Error message:
## HTTP error 400.
## # A tibble: 5,439 x 9
## symbol date open high low close volume adjusted stock.prices
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <lgl>
## 1 AAPL 2010-01-04 7.62 7.66 7.58 7.64 493729600 6.60 NA
## 2 AAPL 2010-01-05 7.66 7.70 7.62 7.66 601904800 6.62 NA
## 3 AAPL 2010-01-06 7.66 7.69 7.53 7.53 552160000 6.51 NA
## 4 AAPL 2010-01-07 7.56 7.57 7.47 7.52 477131200 6.50 NA
## 5 AAPL 2010-01-08 7.51 7.57 7.47 7.57 447610800 6.54 NA
## 6 AAPL 2010-01-11 7.6 7.61 7.44 7.50 462229600 6.48 NA
## 7 AAPL 2010-01-12 7.47 7.49 7.37 7.42 594459600 6.41 NA
## 8 AAPL 2010-01-13 7.42 7.53 7.29 7.52 605892000 6.50 NA
## 9 AAPL 2010-01-14 7.50 7.52 7.46 7.48 432894000 6.46 NA
## 10 AAPL 2010-01-15 7.53 7.56 7.35 7.35 594067600 6.36 NA
## # … with 5,429 more rows
In both cases, the prudent user will review the warnings to determine what happened and whether or not it is acceptable. In the complete_cases = FALSE example, if the user attempts to perform downstream computations at scale, the computations will likely fail, grinding the analysis to a halt. The advantage, however, is that the user can more easily filter to the problem children to determine what happened and decide whether or not this is acceptable.
Today we released 0.3.0, which aligns with the PyTorch release cycle and includes:
Full support for PyTorch v1.5.
Semi-automated GPU tests coverage.
Documentation has been reorganized [docs].
Data augmentation API compatible with torchvision v0.6.0.
Tight integration with the ecosystem, e.g. PyTorch Lightning.
Highlights
Data Augmentation
We provide kornia.augmentation, a high-level framework that implements kornia's core functionalities and is fully compatible with torchvision, supporting batched mode; multiple devices: CPU, GPU, and XLA/TPU (coming); auto-differentiability; and the ability to retrieve (and chain) the applied geometric transforms. To check how to reproduce torchvision in kornia, refer to this Colab: Kornia vs. Torchvision @shijianjian
import torch
import kornia as K
import torchvision as T

# kornia
transform_fcn = torch.nn.Sequential(
    K.augmentation.RandomAffine(
        [-45., 45.], [0., 0.5], [0.5, 1.5], [0., 0.5],
        return_transform=True),
    K.color.Normalize(0.1307, 0.3081),
)

# torchvision
transform_fcn = T.transforms.Compose([
    T.transforms.RandomAffine(
        [-45., 45.], [0., 0.5], [0.5, 1.5], [0., 0.5]),
    T.transforms.ToTensor(),
    T.transforms.Normalize((0.1307,), (0.3081,)),
])
Ecosystem compatibility
Kornia has been designed to be very flexible in order to be integrated into other existing frameworks. See the example below showing how easily you can define a custom data augmentation pipeline to be integrated later into any training framework such as PyTorch Lightning. We provide examples in [here] and [here].
import torch
import torch.nn as nn
import kornia as K

class DataAugmentationPipeline(nn.Module):
    """Module to perform data augmentation using Kornia on torch tensors."""

    def __init__(self, apply_color_jitter: bool = False) -> None:
        super().__init__()
        self._apply_color_jitter = apply_color_jitter
        self._max_val: float = 1024.
        self.transforms = nn.Sequential(
            K.augmentation.Normalize(0., self._max_val),
            K.augmentation.RandomHorizontalFlip(p=0.5)
        )
        self.jitter = K.augmentation.ColorJitter(0.5, 0.5, 0.5, 0.5)

    @torch.no_grad()  # disable gradients for efficiency
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_out = self.transforms(x)
        if self._apply_color_jitter:
            x_out = self.jitter(x_out)
        return x_out
GPU tests
GPU tests can now be run easily with pytest --typetest cuda
Please, do not hesitate to check the release notes on GitHub to learn about the new library features and get more details.
Have a happy coding day
The Kornia team
Multilingual fields in Django
A while ago, back when Django was at version 1.3, my first serious project was http://datoz.com, a real estate information site featuring, among other things, bilingual descriptions of office spaces.
Back then I wrote about how we managed that in a quick and easy manner, with no external dependencies: essentially by creating separate fields for each language and unifying access to them with a property.
from django.db import models
from django.utils import translation
from django.utils.translation import ugettext as _

class Product(models.Model):
    description_en = models.CharField(max_length=255)
    description_es = models.CharField(max_length=255)

    @property
    def description(self):
        # Return the field matching the active language, falling back
        # to a translatable message when no such field exists.
        lang = translation.get_language()
        return getattr(
            self, 'description_%s' % lang,
            _(u'Not available')
        )
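The dispatch that the property performs can be sketched without Django at all. In this minimal stand-alone version, get_language() and set_language() are stand-in stubs for django.utils.translation (not real Django APIs), and the fallback is a plain string:

```python
_current_language = "en"

def get_language():
    """Stand-in for django.utils.translation.get_language()."""
    return _current_language

def set_language(code):
    """Stand-in for django.utils.translation.activate()."""
    global _current_language
    _current_language = code

class Product:
    def __init__(self, description_en, description_es):
        self.description_en = description_en
        self.description_es = description_es

    @property
    def description(self):
        # Look up the attribute for the active language; getattr's third
        # argument provides the fallback when no translation exists.
        return getattr(self, 'description_%s' % get_language(),
                       'Not available')

p = Product(description_en="Office space", description_es="Espacio de oficina")
```

Switching the active language changes which field the property resolves to, with the fallback covering languages that have no dedicated field.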
Sort of quick and dirty, yeah, but back then the existing apps for doing that struck me as buggy, too big, or outright unmaintained. After sharing that post on Reddit, people helpfully pointed me to a couple of libraries that do a much better job, namely:
django-linguo, by Zack Mathew, which uses the same approach but in a more structured manner, with nice features like being able to sort and filter by the special fields without manually specifying them.
django-hvad, by Kristian Øllegaard and Jonas Obrist, which uses a custom manager and lets you do things like Normal.objects.language("en").all(), which I think is much neater than having the language of the query implicitly depend on the global language setting.
You should check them out! |
Copyright 2021 The TF-Agents Authors.
Introduction
The goal of Reinforcement Learning (RL) is to design agents that learn by interacting with an environment. In the standard RL setting, the agent receives an observation at every time step and chooses an action. The action is applied to the environment and the environment returns a reward and a new observation. The agent trains a policy to choose actions that maximize the sum of rewards, also known as the return.
In TF-Agents, environments can be implemented either in Python or TensorFlow. Python environments are usually easier to implement, understand, and debug, but TensorFlow environments are more efficient and allow natural parallelization. The most common workflow is to implement an environment in Python and use one of our wrappers to automatically convert it into TensorFlow.
Let us look at Python environments first. TensorFlow environments follow a very similar API.
Setup
If you haven't installed tf-agents or gym yet, run:
pip install -q tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.environments import utils
from tf_agents.specs import array_spec
from tf_agents.environments import wrappers
from tf_agents.environments import suite_gym
from tf_agents.trajectories import time_step as ts
tf.compat.v1.enable_v2_behavior()
Python Environments
Python environments have a step(action) -> next_time_step method that applies an action to the environment, and returns the following information about the next step:
observation: This is the part of the environment state that the agent can observe to choose its action at the next step.
reward: The agent is learning to maximize the sum of these rewards across multiple steps.
step_type: Interactions with the environment are usually part of a sequence/episode, e.g. multiple moves in a game of chess. step_type can be either FIRST, MID or LAST to indicate whether this time step is the first, an intermediate, or the last step in a sequence.
discount: This is a float representing how much to weight the reward at the next time step relative to the reward at the current time step.
These are grouped into a named tuple TimeStep(step_type, reward, discount, observation).
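As an illustration only, the TimeStep container can be mirrored with a plain namedtuple; the real one lives in tf_agents.trajectories.time_step, and the step_type values 0/1/2 match the FIRST/MID/LAST values visible in the CartPole output later in this tutorial:

```python
from collections import namedtuple

# Illustrative mirror of TF-Agents' TimeStep container.
TimeStep = namedtuple('TimeStep',
                      ['step_type', 'reward', 'discount', 'observation'])

FIRST, MID, LAST = 0, 1, 2  # step_type values, as in ts.StepType

first = TimeStep(step_type=FIRST, reward=0.0, discount=1.0,
                 observation=[0.0, 0.0, 0.0, 0.0])
# Subsequent steps carry a reward and a MID step_type.
mid = first._replace(step_type=MID, reward=1.0,
                     observation=[0.01, 0.24, 0.02, -0.29])
```

Because it is a namedtuple, fields are immutable and _replace produces the next step without mutating the previous one.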
The interface that all Python environments must implement is in environments/py_environment.PyEnvironment. The main methods are:
class PyEnvironment(object):

  def reset(self):
    """Return initial_time_step."""
    self._current_time_step = self._reset()
    return self._current_time_step

  def step(self, action):
    """Apply action and return new time_step."""
    if self._current_time_step is None:
      return self.reset()
    self._current_time_step = self._step(action)
    return self._current_time_step

  def current_time_step(self):
    return self._current_time_step

  def time_step_spec(self):
    """Return time_step_spec."""

  @abc.abstractmethod
  def observation_spec(self):
    """Return observation_spec."""

  @abc.abstractmethod
  def action_spec(self):
    """Return action_spec."""

  @abc.abstractmethod
  def _reset(self):
    """Return initial_time_step."""

  @abc.abstractmethod
  def _step(self, action):
    """Apply action and return new time_step."""
In addition to the step() method, environments also provide a reset() method that starts a new sequence and provides an initial TimeStep. It is not necessary to call the reset method explicitly. We assume that environments reset automatically, either when they get to the end of an episode or when step() is called the first time.
Note that subclasses do not implement step() or reset() directly. They instead override the _step() and _reset() methods. The time steps returned from these methods will be cached and exposed through current_time_step().
The observation_spec and action_spec methods return a nest of (Bounded)ArraySpecs that describe the name, shape, datatype, and ranges of the observations and actions respectively.
In TF-Agents we repeatedly refer to nests, which are defined as any tree-like structure composed of lists, tuples, named tuples, or dictionaries. These can be arbitrarily composed to maintain the structure of observations and actions. We have found this to be very useful for more complex environments where you have many observations and actions.
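Purely as an illustration (this is plain Python, not the tf.nest API that TF-Agents actually uses), applying a function to every leaf of such a nest can be sketched recursively:

```python
def map_nest(fn, nest):
    """Apply fn to every leaf of a nest made of dicts, lists, and tuples."""
    if isinstance(nest, dict):
        return {k: map_nest(fn, v) for k, v in nest.items()}
    if isinstance(nest, (list, tuple)):
        return type(nest)(map_nest(fn, v) for v in nest)
    return fn(nest)  # a leaf value

# A nested, spec-like observation structure (made-up field names):
obs = {'position': [1, 2], 'sensors': {'camera': 3, 'lidar': 4}}
doubled = map_nest(lambda x: x * 2, obs)
```

The structure of the nest is preserved; only the leaves are transformed, which is exactly what makes nests convenient for environments with many observations and actions.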
Using Standard Environments
TF-Agents has built-in wrappers for many standard environments like OpenAI Gym, DeepMind Control, and Atari, so that they follow our py_environment.PyEnvironment interface. These wrapped environments can be easily loaded using our environment suites. Let's load the CartPole environment from the OpenAI Gym and look at the action and time_step_spec.
environment = suite_gym.load('CartPole-v0')
print('action_spec:', environment.action_spec())
print('time_step_spec.observation:', environment.time_step_spec().observation)
print('time_step_spec.step_type:', environment.time_step_spec().step_type)
print('time_step_spec.discount:', environment.time_step_spec().discount)
print('time_step_spec.reward:', environment.time_step_spec().reward)
action_spec: BoundedArraySpec(shape=(), dtype=dtype('int64'), name='action', minimum=0, maximum=1)
time_step_spec.observation: BoundedArraySpec(shape=(4,), dtype=dtype('float32'), name='observation', minimum=[-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], maximum=[4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38])
time_step_spec.step_type: ArraySpec(shape=(), dtype=dtype('int32'), name='step_type')
time_step_spec.discount: BoundedArraySpec(shape=(), dtype=dtype('float32'), name='discount', minimum=0.0, maximum=1.0)
time_step_spec.reward: ArraySpec(shape=(), dtype=dtype('float32'), name='reward')
So we see that the environment expects actions of type int64 in [0, 1] and returns TimeSteps where the observations are float32 vectors of length 4 and the discount factors are float32 in [0.0, 1.0]. Now, let's try to take a fixed action (1,) for a whole episode.
action = np.array(1, dtype=np.int32)
time_step = environment.reset()
print(time_step)
while not time_step.is_last():
time_step = environment.step(action)
print(time_step)
TimeStep(step_type=array(0, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.01285449, 0.04769544, 0.01983412, -0.00245379], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.0138084 , 0.24252741, 0.01978504, -0.2888134 ], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.01865895, 0.43736172, 0.01400878, -0.57519126], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.02740618, 0.6322845 , 0.00250495, -0.8634283 ], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.04005187, 0.82737225, -0.01476362, -1.1553226 ], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.05659932, 1.0226836 , -0.03787007, -1.452598 ], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.07705299, 1.2182497 , -0.06692202, -1.7568679 ], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.10141798, 1.4140631 , -0.10205939, -2.069591 ], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.12969925, 1.6100639 , -0.1434512 , -2.3920157 ], dtype=float32))
TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.16190052, 1.8061239 , -0.19129153, -2.725115 ], dtype=float32))
TimeStep(step_type=array(2, dtype=int32), reward=array(1., dtype=float32), discount=array(0., dtype=float32), observation=array([ 0.198023 , 2.002027 , -0.24579382, -3.0695074 ], dtype=float32))
Creating your own Python Environment
For many clients, a common use case is to apply one of the standard agents (see agents/) in TF-Agents to their problem. To do this, they have to frame their problem as an environment. So let us look at how to implement an environment in Python.
Let's say we want to train an agent to play the following (Black Jack inspired) card game:
The game is played using an infinite deck of cards numbered 1...10.
At every turn the agent can do 2 things: get a new random card, or stop the current round.
The goal is to get the sum of your cards as close to 21 as possible at the end of the round, without going over.
An environment that represents the game could look like this:
Actions: We have 2 actions. Action 0: get a new card, and Action 1: terminate the current round.
Observations: Sum of the cards in the current round.
Reward: The objective is to get as close to 21 as possible without going over, so we can achieve this using the following reward at the end of the round: sum_of_cards - 21 if sum_of_cards <= 21, else -21
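The reward rule above is a one-liner, and it is worth a quick sanity check on its own before writing the full environment:

```python
def end_of_round_reward(sum_of_cards: int) -> int:
    """Reward at the end of a round: sum_of_cards - 21 when we did not
    go over 21, otherwise a flat -21 penalty for busting."""
    return sum_of_cards - 21 if sum_of_cards <= 21 else -21

# A perfect 21 scores 0, lower sums score increasingly negative,
# and any bust scores the worst possible reward of -21.
```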
class CardGameEnv(py_environment.PyEnvironment):

  def __init__(self):
    self._action_spec = array_spec.BoundedArraySpec(
        shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')
    self._observation_spec = array_spec.BoundedArraySpec(
        shape=(1,), dtype=np.int32, minimum=0, name='observation')
    self._state = 0
    self._episode_ended = False

  def action_spec(self):
    return self._action_spec

  def observation_spec(self):
    return self._observation_spec

  def _reset(self):
    self._state = 0
    self._episode_ended = False
    return ts.restart(np.array([self._state], dtype=np.int32))

  def _step(self, action):
    if self._episode_ended:
      # The last action ended the episode. Ignore the current action and start
      # a new episode.
      return self.reset()

    # Make sure episodes don't go on forever.
    if action == 1:
      self._episode_ended = True
    elif action == 0:
      new_card = np.random.randint(1, 11)
      self._state += new_card
    else:
      raise ValueError('`action` should be 0 or 1.')

    if self._episode_ended or self._state >= 21:
      reward = self._state - 21 if self._state <= 21 else -21
      return ts.termination(np.array([self._state], dtype=np.int32), reward)
    else:
      return ts.transition(
          np.array([self._state], dtype=np.int32), reward=0.0, discount=1.0)
Let's make sure we did everything correctly in the environment defined above. When creating your own environment, you must make sure the observations and time_steps generated follow the correct shapes and types as defined in your specs. These are used to generate the TensorFlow graph and can therefore create hard-to-debug problems if we get them wrong.
To validate our environment, we will use a random policy to generate actions and iterate over 5 episodes to make sure everything works as intended. An error is raised if we receive a time_step that does not follow the environment specs.
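The kind of check validate_py_environment performs can be illustrated with a hand-rolled version. This is only a sketch of the shape/dtype/bounds contract; the real utility also drives the environment with a random policy and checks the step_type transitions:

```python
import numpy as np

def check_observation(obs, shape, dtype, minimum):
    """Verify an observation matches a BoundedArraySpec-like contract."""
    assert obs.shape == shape, f"shape {obs.shape} != spec {shape}"
    assert obs.dtype == dtype, f"dtype {obs.dtype} != spec {dtype}"
    assert np.all(obs >= minimum), "observation below spec minimum"

# An observation matching shape=(1,), dtype=np.int32, minimum=0:
check_observation(np.array([7], dtype=np.int32), (1,), np.int32, 0)
print("observation conforms to spec")
```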
environment = CardGameEnv()
utils.validate_py_environment(environment, episodes=5)
Now that we know the environment works as intended, let's run it using a fixed policy: ask for 3 cards and then end the round.
get_new_card_action = np.array(0, dtype=np.int32)
end_round_action = np.array(1, dtype=np.int32)
environment = CardGameEnv()
time_step = environment.reset()
print(time_step)
cumulative_reward = time_step.reward
for _ in range(3):
  time_step = environment.step(get_new_card_action)
  print(time_step)
  cumulative_reward += time_step.reward
time_step = environment.step(end_round_action)
print(time_step)
cumulative_reward += time_step.reward
print('Final Reward = ', cumulative_reward)
TimeStep(step_type=array(0, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([0], dtype=int32)) TimeStep(step_type=array(1, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([2], dtype=int32)) TimeStep(step_type=array(1, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([7], dtype=int32)) TimeStep(step_type=array(1, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([8], dtype=int32)) TimeStep(step_type=array(2, dtype=int32), reward=array(-13., dtype=float32), discount=array(0., dtype=float32), observation=array([8], dtype=int32)) Final Reward = -13.0
Environment Wrappers
An environment wrapper takes a Python environment and returns a modified version of the environment. Both the original and the modified environment are instances of py_environment.PyEnvironment, and multiple wrappers can be chained together.
Some common wrappers can be found in environments/wrappers.py. For example:
ActionDiscretizeWrapper: Converts a continuous action space to a discrete action space.
RunStats: Captures run statistics of the environment such as the number of steps taken, the number of episodes completed, etc.
TimeLimit: Terminates the episode after a fixed number of steps.
Example 1: Action Discretize Wrapper
InvertedPendulum is a PyBullet environment that accepts continuous actions in the range [-2, 2]. If we want to train a discrete-action agent such as DQN on this environment, we have to discretize (quantize) the action space. This is exactly what ActionDiscretizeWrapper does. Compare the action_spec before and after wrapping:
env = suite_gym.load('Pendulum-v0')
print('Action Spec:', env.action_spec())
discrete_action_env = wrappers.ActionDiscretizeWrapper(env, num_actions=5)
print('Discretized Action Spec:', discrete_action_env.action_spec())
Action Spec: BoundedArraySpec(shape=(1,), dtype=dtype('float32'), name='action', minimum=-2.0, maximum=2.0) Discretized Action Spec: BoundedArraySpec(shape=(), dtype=dtype('int32'), name='action', minimum=0, maximum=4)
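The mapping behind the discretized spec above can be reproduced by hand: with num_actions=5, the discrete indices 0..4 presumably cover [-2, 2] with evenly spaced values. A sketch of that assumption, not the wrapper's actual implementation:

```python
def discrete_to_continuous(index, low=-2.0, high=2.0, num_actions=5):
    """Map a discrete action index to an evenly spaced continuous action."""
    return low + index * (high - low) / (num_actions - 1)

print([discrete_to_continuous(i) for i in range(5)])
# → [-2.0, -1.0, 0.0, 1.0, 2.0]
```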
The wrapped discrete_action_env is an instance of py_environment.PyEnvironment and can be treated like a regular Python environment.
TensorFlow Environments
The interface for TF environments is defined in environments/tf_environment.TFEnvironment and looks very similar to the Python environments. TF environments differ from Python envs in a couple of ways:
They generate tensor objects instead of arrays
TF environments add a batch dimension to the tensors generated when compared to the specs.
Converting the Python environments into TFEnvs allows TensorFlow to parallelize operations. For example, one could define a collect_experience_op that collects data from the environment and adds it to a replay_buffer, and a train_op that reads from the replay_buffer and trains the agent, and run them in parallel naturally in TensorFlow.
class TFEnvironment(object):
  def time_step_spec(self):
    """Describes the `TimeStep` tensors returned by `step()`."""

  def observation_spec(self):
    """Defines the `TensorSpec` of observations provided by the environment."""

  def action_spec(self):
    """Describes the TensorSpecs of the action expected by `step(action)`."""

  def reset(self):
    """Returns the current `TimeStep` after resetting the Environment."""
    return self._reset()

  def current_time_step(self):
    """Returns the current `TimeStep`."""
    return self._current_time_step()

  def step(self, action):
    """Applies the action and returns the new `TimeStep`."""
    return self._step(action)

  @abc.abstractmethod
  def _reset(self):
    """Returns the current `TimeStep` after resetting the Environment."""

  @abc.abstractmethod
  def _current_time_step(self):
    """Returns the current `TimeStep`."""

  @abc.abstractmethod
  def _step(self, action):
    """Applies the action and returns the new `TimeStep`."""
The current_time_step() method returns the current time_step and initializes the environment if needed.
The reset() method forces a reset of the environment and returns the current time_step.
If the action doesn't depend on the previous time_step, a tf.control_dependency is needed in Graph mode.
For now, let us look at how TFEnvironments are created.
Creating your own TensorFlow Environment
This is more complicated than creating environments in Python, so we will not cover it in this colab. An example is available here. The more common use case is to implement your environment in Python and wrap it in TensorFlow using our TFPyEnvironment wrapper (see below).
Wrapping a Python Environment in TensorFlow
We can easily wrap any Python environment into a TensorFlow environment using the TFPyEnvironment wrapper.
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
print(isinstance(tf_env, tf_environment.TFEnvironment))
print("TimeStep Specs:", tf_env.time_step_spec())
print("Action Specs:", tf_env.action_spec())
True TimeStep Specs: TimeStep(step_type=TensorSpec(shape=(), dtype=tf.int32, name='step_type'), reward=TensorSpec(shape=(), dtype=tf.float32, name='reward'), discount=BoundedTensorSpec(shape=(), dtype=tf.float32, name='discount', minimum=array(0., dtype=float32), maximum=array(1., dtype=float32)), observation=BoundedTensorSpec(shape=(4,), dtype=tf.float32, name='observation', minimum=array([-4.8000002e+00, -3.4028235e+38, -4.1887903e-01, -3.4028235e+38], dtype=float32), maximum=array([4.8000002e+00, 3.4028235e+38, 4.1887903e-01, 3.4028235e+38], dtype=float32))) Action Specs: BoundedTensorSpec(shape=(), dtype=tf.int64, name='action', minimum=array(0), maximum=array(1))
Note that the specs are now of type: (Bounded)TensorSpec.
Usage Examples
Simple Example
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
# reset() creates the initial time_step after resetting the environment.
time_step = tf_env.reset()
num_steps = 3
transitions = []
reward = 0
for i in range(num_steps):
  action = tf.constant([i % 2])
  # applies the action and returns the new TimeStep.
  next_time_step = tf_env.step(action)
  transitions.append([time_step, action, next_time_step])
  reward += next_time_step.reward
  time_step = next_time_step
np_transitions = tf.nest.map_structure(lambda x: x.numpy(), transitions)
print('\n'.join(map(str, np_transitions)))
print('Total reward:', reward.numpy())
[TimeStep(step_type=array([0], dtype=int32), reward=array([0.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[-0.03501577, -0.04957427, 0.00623939, 0.03762257]], dtype=float32)), array([0], dtype=int32), TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[-0.03600726, -0.24478514, 0.00699184, 0.33226755]], dtype=float32))] [TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[-0.03600726, -0.24478514, 0.00699184, 0.33226755]], dtype=float32)), array([1], dtype=int32), TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[-0.04090296, -0.0497634 , 0.01363719, 0.04179767]], dtype=float32))] [TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[-0.04090296, -0.0497634 , 0.01363719, 0.04179767]], dtype=float32)), array([0], dtype=int32), TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[-0.04189822, -0.24507822, 0.01447314, 0.33875188]], dtype=float32))] Total reward: [3.]
Whole Episodes
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
time_step = tf_env.reset()
rewards = []
steps = []
num_episodes = 5
for _ in range(num_episodes):
  episode_reward = 0
  episode_steps = 0
  while not time_step.is_last():
    action = tf.random.uniform([1], 0, 2, dtype=tf.int32)
    time_step = tf_env.step(action)
    episode_steps += 1
    episode_reward += time_step.reward.numpy()
  rewards.append(episode_reward)
  steps.append(episode_steps)
  time_step = tf_env.reset()
num_steps = np.sum(steps)
avg_length = np.mean(steps)
avg_reward = np.mean(rewards)
print('num_episodes:', num_episodes, 'num_steps:', num_steps)
print('avg_length', avg_length, 'avg_reward:', avg_reward)
num_episodes: 5 num_steps: 138 avg_length 27.6 avg_reward: 27.6
Let's run a simple Python program on Hadoop MapReduce. The program will compute the maximum temperature of each year from a historical record. We will use CentOS for the example, although it works on any other Linux distribution.
If you don't have Hadoop installed yet, you may be interested in this post: Instalación paso a paso de Hadoop en Linux y un ejemplo de uso (a step-by-step Hadoop installation on Linux with a usage example).
First we create a tempMax folder on the desktop, which we will use as the working directory:
From the terminal, move into that folder:
cd Escritorio
cd tempMax
Create the Python file where we will write our mapper code:
touch mapperMaxTemp.py
Before writing the mapper code, note that our data will be laid out as follows:
That is, each row contains the year, the month and the temperature separated by tabs (fictitious data, generated with a random function). Each subproblem will therefore be one year. We have to emit key-value pairs where the key is the year and the value is the temperature.
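Splitting one such tab-separated row into a key-value pair looks like this (a standalone illustration with a made-up row):

```python
linea = "2017\t07\t41.3"            # year <TAB> month <TAB> temperature
anyo, mes, temp = linea.split("\t", 2)
print("%s\t%s" % (anyo, temp))      # emits the pair: year <TAB> temperature
```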
Once the file mapperMaxTemp.py has been created, open it from the desktop with a double click and write the mapper code:
#!/usr/bin/python
import sys

"""
MaxTemp mapper
From http://exponentis.es/
"""

# For each reading, emit the pair <year, temp>
for linea in sys.stdin:
    linea = linea.strip()
    anyo, mes, temp = linea.split("\t", 2)
    print("%s\t%s" % (anyo, temp))
For each input line, the code first removes leading and trailing whitespace with strip() and then extracts the year, month and temperature, splitting the input at each tab (\t). Finally, the print emits the key and value separated by a tab.
Save the file and, back in the Linux terminal, give ourselves permission to execute the mapper:
chmod u+x mapperMaxTemp.py
Now let's create the reducer functionality:
touch reducerMaxTemp.py
Open the newly created file reducerMaxTemp.py with a double click in its desktop folder. Now we have to write code that computes the maximum of the temperatures received:
#!/usr/bin/python
import sys

"""
MaxTemp reducer
From http://exponentis.es/
"""

subproblema = None
tempMaxima = None

for claveValor in sys.stdin:
    anyo, temp = claveValor.split("\t", 1)
    # convert the temp to float
    temp = float(temp)
    # the first subproblem is the reducer's first year (and, for now, also the max temp)
    if subproblema == None:
        subproblema = anyo
        tempMaxima = temp
    # if the year belongs to the current subproblem, check whether it is the maximum temperature
    if subproblema == anyo:
        if temp > tempMaxima:
            tempMaxima = temp
    else:  # if we are done with the subproblem, emit it
        print("%s\t%s" % (subproblema, tempMaxima))
        # move on to the next subproblem (for now its temp is the maximum)
        subproblema = anyo
        tempMaxima = temp

# the loop above does not emit the last subproblem
print("%s\t%s" % (subproblema, tempMaxima))
The program produces many key-value pairs, and we have to detect when each group ends. We split out the current year and temperature. In the first subproblem we take the first temperature as the maximum; after that, each incoming temperature is compared against the stored maximum, and if the current temperature is higher, the stored maximum is updated.
This is done for every temperature until the year changes, at which point we switch subproblems, emitting the solution of the previous subproblem (year and maximum temperature).
The final print emits the solution of the last subproblem.
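The whole pipeline can be tried without Hadoop by reproducing cat | map | sort | reduce in plain Python (a sketch with made-up rows, not the Hadoop job itself):

```python
rows = ["2017\t01\t5.0", "2017\t07\t41.3", "2018\t02\t-3.0", "2018\t08\t39.9"]

# map step: emit (year, temp), then sort by key as Hadoop Streaming would
pairs = sorted((r.split("\t", 2)[0], float(r.split("\t", 2)[2])) for r in rows)

# reduce step: maximum per year
maxima = {}
for anyo, temp in pairs:
    maxima[anyo] = max(temp, maxima.get(anyo, temp))
print(maxima)  # → {'2017': 41.3, '2018': 39.9}
```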
As with the mapper, we must give ourselves execute permission on the reducer from the Linux console:
chmod u+x reducerMaxTemp.py
Now download the file medidas.txt into our working folder. This file contains 730 daily temperature records across 2017 and 2018, in the format we saw earlier: year, month and temperature separated by tabs. Note: the data is made up, with a daily maximum temperature set by a random function between -5 ºC and 48 ºC.
Download the file medidas.txt from here: https://mega.nz/#!Pnpw3aYK
Let's run the mapper from the Linux console to check that it is coded correctly. Note: the vertical bar is typed with "Alt Gr + 1".
cat medidas.txt | ./mapperMaxTemp.py
The result should be that every key-value pair the mapper emits is printed on screen, i.e. all the temperatures of each year, without the month:
Now we run the Map and the Reduce together, without Hadoop, as a test:
cat medidas.txt | ./mapperMaxTemp.py | sort -k1,1 | ./reducerMaxTemp.py
The result is the maximum for each year; the values coincide, as expected, because the same random function was used:
Seeing that it works, we can now run it on Hadoop as follows:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.8.5.jar -files ./mapperMaxTemp.py -mapper ./mapperMaxTemp.py -file ./reducerMaxTemp.py -reducer ./reducerMaxTemp.py -input medidas.txt -output ./miSalidaMaxTemp1
With this command we tell Hadoop which files are our mapper and reducer, and that it will also have to distribute them across the different servers (which is why each appears twice). We also point it at the input data (medidas.txt) and the output we want.
A new folder called miSalidaMaxTemp1 will have been created in our directory, containing a file called part-00000 with the result of the analysis:
With this we would be done. But suppose we now also want to obtain the maximum temperature for each month.
We modify the mapper by adding the month:
We have separated year and month with a hyphen, with the temperature after a tab.
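The modified mapper itself appears above only as a screenshot; based on the described year-month key, its emit step could look roughly like this (an assumption, not the post's exact code):

```python
#!/usr/bin/python
def emit(linea):
    """Build the composite-key output line for one tab-separated input row."""
    anyo, mes, temp = linea.strip().split("\t", 2)
    return "%s-%s\t%s" % (anyo, mes, temp)

# In the real mapper this runs over sys.stdin, one print(emit(linea)) per line.
print(emit("2017\t07\t41.3\n"))  # → 2017-07	41.3
```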
The Hadoop command would be very similar to the previous one, but we must also tell it that our mapper emits a composite key (two fields) separated by a hyphen. We must also specify a different output, because Hadoop does not overwrite the previous one and raises an error instead:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.8.5.jar -Dstream.num.map.key.fields=2 -Dmap.output.key.field.separator="-" -files ./mapperMaxTemp.py -mapper ./mapperMaxTemp.py -file ./reducerMaxTemp.py -reducer ./reducerMaxTemp.py -input medidas.txt -output ./miSalidaMaxTemp2
The result, in the new part-00000 file, is the maximum temperature per month:
Note: if we wanted to add a combiner, it could be done as follows:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.8.5.jar -Dstream.num.map.key.fields=2 -Dmap.output.key.field.separator="-" -files ./mapperMaxTemp.py -mapper ./mapperMaxTemp.py -file ./reducerMaxTemp.py -reducer ./reducerMaxTemp.py -combiner ./reducerMaxTemp.py -input medidas.txt -output ./miSalidaMaxTemp2
I wrote this code:
import time
import threading
import urllib3

http = urllib3.PoolManager()

#f = open('domains.txt','r')
f = open('temp.txt','r')

def myThread(number):
    line = f.readline()
    while line:
        print(threading.currentThread().getName() + ' http://' + line)
        page = http.request('GET','http://'+line)
        print(page.status)
        time.sleep(0.01)
        line = f.readline()
    print('Thread finished')

if __name__ == '__main__':
    for i in range(5):
        my_thread = threading.Thread(target=myThread, args=(i,))
        my_thread.start()
temp.txt looks something like this:
00034.ru
00038.ru
0003t.ru
0004.ru
00043.ru
000444.ru
0004444.ru
0005.ru
000500.ru
00054.ru
00055.ru
For some reason I get errors:
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 57, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.7/socket.py", line 748, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.7/http/client.py", line 1252, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1298, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1247, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib/python3.7/http/client.py", line 966, in send
self.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 181, in connect
conn = self._new_conn()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2ea5149a50>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "domains.py", line 14, in myThread
page = http.request('GET','http://'+line)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 68, in request
**urlopen_kw)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 89, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 323, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 667, in urlopen
**response_kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 667, in urlopen
**response_kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 667, in urlopen
**response_kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0--0------------------------------------------------bibleonline.ru%0a', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2ea5149a50>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Exception in thread Thread-1 (and likewise Thread-4, Thread-2 and Thread-5): the tracebacks are identical to the one above, each ending in the same error, e.g.:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0--0----------------------------------------------------------0.ru%0a', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object>: Failed to establish a new connection: [Errno -2] Name or service not known'))
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib/python3.7/http/client.py", line 966, in send
self.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 181, in connect
conn = self._new_conn()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2ea7c24610>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "domains.py", line 14, in myThread
page = http.request('GET','http://'+line)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 68, in request
**urlopen_kw)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 89, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 323, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 667, in urlopen
**response_kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 667, in urlopen
**response_kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 667, in urlopen
**response_kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0--0------------------0----0----------0-------0---------------0.ru%0a', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2ea7c24610>: Failed to establish a new connection: [Errno -2] Name or service not known'))
What should I do?
Python Re.start() & Re.end()
start() & end()
These expressions return the indices of the start and end of the substring matched by the group.
Code
>>> import re
>>> m = re.search(r'\d+','1234')
>>> m.end()
4
>>> m.start()
0
Task
You are given a string and a substring pattern.
Your task is to find the indices of the start and end of each occurrence of the pattern in the string.
Input Format
The first line contains the string.
The second line contains the pattern.
Constraints
Output Format
Print the tuple in this format: (start_index, end_index).
If no match is found, print (-1, -1).
Sample Input
aaadaaaa
Sample Output
(0, 1)
(1, 2)
(4, 5)
Solution in Python
import re

text, pattern = input(), input()
m = list(re.finditer("(?=(%s))" % pattern, text))
if not m:
    print((-1, -1))
for i in m:
    print((i.start(1), i.end(1) - 1))
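The `(?=(%s))` lookahead is the key trick: a lookahead is zero-width, so the scan advances one character at a time and overlapping occurrences are all reported through capturing group 1, whereas a plain pattern consumes each match it finds. A quick stdlib comparison (the example string here is my own, not from the problem):

```python
import re

text = "aaadaa"

# Plain finditer consumes each match, so the overlapping "aa" at index 1 is skipped.
plain = [(m.start(), m.end() - 1) for m in re.finditer("aa", text)]

# A zero-width lookahead with a capturing group matches at every position,
# so all overlapping occurrences are reported via group 1.
overlapping = [(m.start(1), m.end(1) - 1) for m in re.finditer("(?=(aa))", text)]

print(plain)        # [(0, 1), (4, 5)] -- misses the occurrence starting at index 1
print(overlapping)  # [(0, 1), (1, 2), (4, 5)]
```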
2020/01/20
Logistic classification is known to be one of the more accurate classification algorithms. Because of that, it can be applied directly to real problems, and it is also an important component for understanding Neural Networks and Deep Learning.
Before getting into Logistic Classification, let's recap what we have covered so far on Linear Regression.
The hypothesis is linear in x: a first-degree function that models how we expect our data to behave. The cost is a function of W, defined as the mean of the squared differences between the hypothesized values and the true y values. The goal is to find the W that minimizes this cost, and the algorithm for doing so is gradient descent, which repeatedly steps downhill from the current W by an amount scaled by the learning rate alpha.
Whereas Linear Regression predicted a number, the Classification we cover today is binary: choosing one of two options.
Think about it with a figure like the one below. Suppose we want to build a model that classifies a student's exam result as pass or fail based on hours studied. Intuitively, plain Linear Regression might seem good enough.
But this runs into a few problems. Suppose we found a decision point from three pass and three fail samples. If a student who studied 50 hours and passed is added, the fit, which is determined linearly by all of the data, shifts, and a student who should have been judged as passing can end up on the fail side of the threshold.
Another problem is that Classification requires the output to be exactly 0 or 1, whereas the Linear Regression hypothesis can produce values far below 0 or far above 1.
Logistic Classification therefore needs a function that limits values to the range 0 to 1. After much research, the following function was adopted as the best fit for this model: substituting z = WX and writing H(x) = g(z) gives the logistic hypothesis.
This function is called the logistic function, or sigmoid function. As z grows without bound, g(z) converges to 1; as z decreases without bound, g(z) converges to 0.
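The sigmoid's limiting behavior is easy to confirm numerically with a small standalone sketch (plain Python, separate from the TensorFlow lab code):

```python
import math

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)): maps any real z into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))    # 0.5, the midpoint
print(sigmoid(10))   # ~0.99995, approaching 1 as z grows
print(sigmoid(-10))  # ~0.000045, approaching 0 as z shrinks
```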
In summary, the Logistic Classification hypothesis takes the form shown above.
Our previous cost function had the shape shown above. Its advantage was that, starting from any point, the point of minimum cost could always be found. With the old hypothesis, the cost function was a quadratic bowl, so the minimum could be reached from any starting point.
If we plug the sigmoid hypothesis into that same cost function, however, the curve becomes bumpy, as in the lower right of the figure. Depending on the starting point, we can no longer find the global minimum of the whole function and instead get stuck at a local minimum. So, unlike the linear case, the cost function must be changed to match the new hypothesis for the model to predict correctly.
The Logistic Classification cost function splits into the cases y = 0 and y = 1, as shown under the title of the slide. Because the new hypothesis contains an exponential, which causes the bumpiness, we apply its counterpart, the logarithm, to smooth the curve. The key point of splitting on 0 and 1 is that in each case the cost is 0 when the prediction is correct and diverges to infinity when it is wrong. Call this c(H(x), y); averaging it over the data gives the cost function we are after.
Removing the case split, the derivation above collapses into the single-line formula in the slide. It may look complicated, but since y is either 0 or 1, plugging in either value makes one of the two terms vanish, recovering the case-by-case formula.
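That collapse can be checked numerically: plugging y = 1 into the one-line cost leaves only the -log(H(x)) term, and y = 0 leaves only -log(1 - H(x)). A small pure-Python check (the sample probabilities are my own):

```python
import math

def cost(h, y):
    # -y*log(h) - (1-y)*log(1-h): the single-line logistic cost for one sample
    return -y * math.log(h) - (1 - y) * math.log(1 - h)

# Confident, correct predictions cost almost nothing...
print(cost(0.99, 1))  # ~0.01
print(cost(0.01, 0))  # ~0.01

# ...while confident, wrong predictions blow up toward infinity.
print(cost(0.01, 1))  # ~4.6
print(cost(0.99, 0))  # ~4.6
```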
The overall gradient descent procedure is the same as for Linear Regression. When gradient descent was first introduced, we went through the differentiation to understand how it works; from this point on that derivation isn't needed. Set up the cost function correctly and use the library routines provided when writing the code.
Before starting the lab: the Hypothesis, Cost function, and Gradient descent formulas from the lecture are shown above.
Let's walk through the full code piece by piece.
# Lab 5 Logistic Regression Classifier
import tensorflow as tf
tf.set_random_seed(777)  # for reproducibility

x_data = [[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]]
y_data = [[0], [0], [0], [1], [1], [1]]

# placeholders for a tensor that will be always fed.
X = tf.placeholder(tf.float32, shape=[None, 2])
Y = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(-tf.matmul(X, W)))
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)

# cost/loss function
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
# Everything above defines the graph

# Accuracy computation
# True if hypothesis > 0.5 else False
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

# Below: training the model
# Launch graph
with tf.Session() as sess:
    # Initialize TensorFlow variables
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        cost_val, _ = sess.run([cost, train], feed_dict={X: x_data, Y: y_data})
        if step % 200 == 0:
            print(step, cost_val)

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect (Y): ", c, "\nAccuracy: ", a)
First, look at the part that specifies the x and y data: each x is an array [x1, x2], and y takes the value 0 or 1 (false or true). Using our running example, a student studies for x1 hours and watches x2 video lectures, and y is the result (pass/fail). Note also that declaring the placeholders with an explicit shape uses the Matrix concepts from the previous section.
Because the Logistic Classification hypothesis is a sigmoid, it doesn't end with the familiar X * W + b expression; TensorFlow's built-in tf.sigmoid expresses it easily. As the comment in the code shows, the formula can also be written out directly with tf's built-in math functions.
The cost function is a direct transcription of the formula we derived, and minimization again uses GradientDescentOptimizer, just as before.
predicted compares the hypothesis against 0.5 (the usual cutoff between 0 and 1) and, via a type cast, yields 0 or 1 rather than true/false. accuracy likewise expresses whether Y and predicted agree as 0s and 1s and takes the mean.
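The cast-and-compare logic in predicted/accuracy is plain thresholding at 0.5 followed by averaging the matches. A pure-Python equivalent (the sample values here are made up) looks roughly like this:

```python
# Pure-Python equivalent of the predicted/accuracy graph ops (sample values are invented).
hypothesis = [0.03, 0.16, 0.30, 0.78, 0.94, 0.98]
y = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]

# tf.cast(hypothesis > 0.5, tf.float32): threshold each probability at 0.5
predicted = [1.0 if h > 0.5 else 0.0 for h in hypothesis]

# tf.reduce_mean(tf.cast(tf.equal(predicted, Y), tf.float32)): fraction of matches
accuracy = sum(1.0 for p, t in zip(predicted, y) if p == t) / len(y)

print(predicted)  # [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(accuracy)   # 1.0
```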
Running the code produces the output below.
0 1.73078
200 0.571512
400 0.507414
600 0.471824
800 0.447585
...
9200 0.159066
9400 0.15656
9600 0.154132
9800 0.151778
10000 0.149496

Hypothesis:  [[ 0.03074029]
 [ 0.15884677]
 [ 0.30486736]
 [ 0.78138196]
 [ 0.93957496]
 [ 0.98016882]]
Correct (Y):  [[ 0.]
 [ 0.]
 [ 0.]
 [ 1.]
 [ 1.]
 [ 1.]]
Accuracy:  1.0
Over the 10,000 iterations the cost shrinks to a very small value with every step, and the learned outputs, predictions, and accuracy all come out exactly as expected from reading the code.
The next lab predicts whether a patient has diabetes from given blood glucose data. Since there is a lot of data this time, it is saved to a file like the one below and loaded through numpy's loadtxt.
# Lab 5 Logistic Regression Classifier
import tensorflow as tf
import numpy as np
tf.set_random_seed(777)  # for reproducibility

xy = np.loadtxt('data-03-diabetes.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]
y_data = xy[:, [-1]]

print(x_data.shape, y_data.shape)

# placeholders for a tensor that will be always fed.
X = tf.placeholder(tf.float32, shape=[None, 8])
Y = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.random_normal([8, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(-tf.matmul(X, W)))
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)

# cost/loss function
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

# Accuracy computation
# True if hypothesis > 0.5 else False
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

# Launch graph
with tf.Session() as sess:
    # Initialize TensorFlow variables
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        cost_val, _ = sess.run([cost, train], feed_dict={X: x_data, Y: y_data})
        if step % 200 == 0:
            print(step, cost_val)

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect (Y): ", c, "\nAccuracy: ", a)
Following Python's slicing rules, x_data stores every row (:) with all columns except the last (0:-1), while y_data likewise takes every row but only the last column ([-1]), keeping it as a list so the result stays two-dimensional.
The shapes match the data: x has 8 features, so 8; the label y is a single column, so 1.
The rest builds the model just as in lab 1; checking the result gives the output below.
0 0.82794
200 0.755181
400 0.726355
600 0.705179
800 0.686631
...
9600 0.492056
9800 0.491396
10000 0.490767

Hypothesis:  ...
 [0.74610120]
 [0.79919308]
 [0.72995949]
 [0.882917188]]
Correct (Y):  ...
 [ 1.]
 [ 1.]
 [ 1.]]
Accuracy:  0.762846
The full output isn't reproduced here. Looking only at the tail of the predictions, the learned values and predictions all look normal, yet the accuracy isn't 100%, so we can infer that some mispredicted values exist in the part of the output that isn't shown.
Conventions:
T1= Time 1 (usually the older dataset)
T2= Time 2 (you guessed it...the newer data)
We'll use blue to represent negative numbers and red to represent positive numbers
Change = T2 - T1
This is the most common way to do topographic change detection (hint: use this one if you're unsure). It works out nicely that decreases in elevation (erosion) come out as negative numbers and increases in elevation (deposition) come out as positive numbers.
In the ArcGIS Python Console:
from arcpy.sa import *
diff = Raster("Time2.tif") - Raster("Time1.tif")
Change = T1 - T2
By switching the times, we also switch the signs: positive numbers now represent decreases in elevation (erosion) and negative numbers represent increases in elevation (deposition). There are probably good reasons to use this type of change calculation, but I can't think of any off the top of my head. I'll do this by accident sometimes and get really confused.
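To see the sign flip concretely, here is a pure-Python sketch of the same cell-by-cell raster math on a toy grid (the elevation values are invented for illustration; the arcpy Raster algebra above does this per cell across the whole raster):

```python
# Toy 2x2 elevation grids; values are invented for illustration.
t1 = [[10.0, 12.0], [11.0, 9.0]]   # older surface (T1)
t2 = [[ 9.5, 12.0], [11.5, 8.0]]   # newer surface (T2)

# Change = T2 - T1: negative cells = erosion, positive cells = deposition
change = [[b - a for a, b in zip(r1, r2)] for r1, r2 in zip(t1, t2)]

# Change = T1 - T2 simply negates every cell.
flipped = [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(t1, t2)]

print(change)   # [[-0.5, 0.0], [0.5, -1.0]]
print(flipped)  # [[0.5, 0.0], [-0.5, 1.0]]
```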
# Copyright (C) 2020 Red Hat, Inc. <http://www.redhat.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from random import choice
from glusto.core import Glusto as g
from glustolibs.gluster.brick_ops import reset_brick
from glustolibs.gluster.brick_libs import (get_all_bricks, are_bricks_offline)
from glustolibs.gluster.exceptions import ExecutionError
from glustolibs.gluster.gluster_base_class import GlusterBaseClass, runs_on
from glustolibs.gluster.glusterdir import rmdir
from glustolibs.gluster.glusterfile import remove_file
from glustolibs.gluster.heal_ops import trigger_heal_full
from glustolibs.gluster.heal_libs import monitor_heal_completion
from glustolibs.gluster.lib_utils import collect_bricks_arequal
from glustolibs.gluster.volume_libs import (
get_subvols, wait_for_volume_process_to_be_online)
from glustolibs.misc.misc_libs import upload_scripts
from glustolibs.io.utils import (validate_io_procs, wait_for_io_to_complete)
@runs_on([['replicated', 'distributed-replicated'],
['glusterfs', 'nfs']])
class TestAfrResetBrickHeal(GlusterBaseClass):
@classmethod
def setUpClass(cls):
# Calling GlusterBaseClass setUpClass
cls.get_super_method(cls, 'setUpClass')()
# Upload IO scripts for running IO on mounts
cls.script_upload_path = (
"/usr/share/glustolibs/io/scripts/file_dir_ops.py")
ret = upload_scripts(cls.clients, cls.script_upload_path)
if not ret:
raise ExecutionError("Failed to upload IO scripts to clients {}".
format(cls.clients))
def setUp(self):
# calling GlusterBaseClass setUp
self.get_super_method(self, 'setUp')()
# Setup volume and mount it.
if not self.setup_volume_and_mount_volume(self.mounts):
raise ExecutionError("Failed to Setup_Volume and Mount_Volume")
def tearDown(self):
# Wait if any IOs are pending from the test
if self.all_mounts_procs:
ret = wait_for_io_to_complete(self.all_mounts_procs, self.mounts)
if ret:
raise ExecutionError(
"Wait for IO completion failed on some of the clients")
# Unmount and cleanup the volume
if not self.unmount_volume_and_cleanup_volume(self.mounts):
raise ExecutionError("Unable to unmount and cleanup volume")
# Calling GlusterBaseClass tearDown
self.get_super_method(self, 'tearDown')()
@classmethod
def tearDownClass(cls):
for each_client in cls.clients:
ret = remove_file(each_client, cls.script_upload_path)
if not ret:
raise ExecutionError("Failed to delete file {}".
format(cls.script_upload_path))
cls.get_super_method(cls, 'tearDownClass')()
def test_afr_reset_brick_heal_full(self):
"""
1. Create files/dirs from mount point
2. With IO in progress execute reset-brick start
3. Now format the disk from back-end, using rm -rf <brick path>
4. Execute reset brick commit and check for the brick is online.
5. Issue volume heal using "gluster vol heal <volname> full"
6. Check arequal for all bricks to verify all backend bricks
including the resetted brick have same data
"""
self.all_mounts_procs = []
for count, mount_obj in enumerate(self.mounts):
cmd = ("/usr/bin/env python %s create_deep_dirs_with_files "
"--dirname-start-num %d --dir-depth 3 --dir-length 5 "
"--max-num-of-dirs 5 --num-of-files 5 %s" % (
self.script_upload_path, count,
mount_obj.mountpoint))
proc = g.run_async(mount_obj.client_system, cmd,
user=mount_obj.user)
self.all_mounts_procs.append(proc)
all_bricks = get_all_bricks(self.mnode, self.volname)
self.assertIsNotNone(all_bricks, "Unable to fetch bricks of volume")
brick_to_reset = choice(all_bricks)
# Start reset brick
ret, _, err = reset_brick(self.mnode, self.volname,
src_brick=brick_to_reset, option="start")
self.assertEqual(ret, 0, err)
g.log.info("Reset brick: %s started", brick_to_reset)
# Validate the brick is offline
ret = are_bricks_offline(self.mnode, self.volname, [brick_to_reset])
self.assertTrue(ret, "Brick:{} is still online".format(brick_to_reset))
# rm -rf of the brick directory
node, brick_path = brick_to_reset.split(":")
ret = rmdir(node, brick_path, force=True)
self.assertTrue(ret, "Unable to delete the brick {} on "
"node {}".format(brick_path, node))
# Reset brick commit
ret, _, err = reset_brick(self.mnode, self.volname,
src_brick=brick_to_reset, option="commit")
self.assertEqual(ret, 0, err)
g.log.info("Reset brick committed successfully")
# Check the brick is online
ret = wait_for_volume_process_to_be_online(self.mnode, self.volname)
self.assertTrue(ret, "Few volume processess are offline for the "
"volume: {}".format(self.volname))
# Trigger full heal
ret = trigger_heal_full(self.mnode, self.volname)
self.assertTrue(ret, "Unable to trigger the heal full command")
# Wait for the heal completion
ret = monitor_heal_completion(self.mnode, self.volname)
self.assertTrue(ret, "Heal didn't complete in 20 mins time")
# Validate io on the clients
ret = validate_io_procs(self.all_mounts_procs, self.mounts)
self.assertTrue(ret, "IO failed on the mounts")
        self.all_mounts_procs = []
# Check arequal of the back-end bricks after heal completion
all_subvols = get_subvols(self.mnode, self.volname)['volume_subvols']
for subvol in all_subvols:
ret, arequal_from_subvol = collect_bricks_arequal(subvol)
            self.assertTrue(ret, "Failed to collect arequal across the"
                            " bricks in the subvol {}".format(subvol))
            self.assertEqual(len(set(arequal_from_subvol)), 1, "Arequal is "
                             "not the same on all the bricks in the subvol")
Quickstart: Create a virtual network using the Azure portal
In this quickstart, you learn how to create a virtual network using the Azure portal. You deploy two virtual machines (VMs). Next, you securely communicate between VMs and connect to VMs from the internet. A virtual network is the fundamental building block for your private network in Azure. It enables Azure resources, like VMs, to securely communicate with each other and with the internet.
Prerequisites
An Azure account with an active subscription. Create one for free.
Sign in to Azure
Sign in to the Azure portal.
Create a virtual network
From the Azure portal menu, select Create a resource. From the Azure Marketplace, select Networking > Virtual network.
In Create virtual network, enter or select this information:
Subscription: Select your subscription.
Resource group: Select Create new, enter myResourceGroup, then select OK.
Name: Enter myVirtualNetwork.
Location: Select East US.
Select Next: IP Addresses, and for IPv4 address space, enter 10.1.0.0/16.
Select Add subnet, then enter myVirtualSubnet for Subnet name and 10.1.0.0/24 for Subnet address range.
Select Add, then select Review + create. Leave the rest as default and select Create.
In Create virtual network, select Create.
Create virtual machines
Create two VMs in the virtual network:
Create the first VM
From the Azure portal menu, select Create a resource.
From the Azure Marketplace, select Compute > Windows Server 2019 Datacenter. Select Create.
In Create a virtual machine - Basics, enter or select this information:
Project details
Subscription: Select your subscription.
Resource group: Select myResourceGroup. You created this resource group in the previous section.
Instance details
Virtual machine name: Enter myVm1.
Region: Select East US.
Availability options: Default to No infrastructure redundancy required.
Image: Default to Windows Server 2019 Datacenter.
Size: Default to Standard DS1 v2.
Administrator account
Username: Enter a username of your choosing.
Password: Enter a password of your choosing. The password must be at least 12 characters long and meet the defined complexity requirements.
Confirm Password: Re-enter the password.
Inbound port rules
Public inbound ports: Select Allow selected ports.
Select inbound ports: Enter HTTP (80) and RDP (3389).
Save money
Already have a Windows license?: Default to No.
Select Next: Disks.
In Create a virtual machine - Disks, keep the defaults and select Next: Networking.
In Create a virtual machine - Networking, select this information:
Virtual network: Default to myVirtualNetwork.
Subnet: Default to myVirtualSubnet (10.1.0.0/24).
Public IP: Default to (new) myVm-ip.
NIC network security group: Default to Basic.
Public inbound ports: Default to Allow selected ports.
Select inbound ports: Default to HTTP and RDP.
Select Next: Management.
In Create a virtual machine - Management, for Diagnostics storage account, select Create New.
In Create storage account, enter or select this information:
Name: Enter myvmstorageaccount. If this name is taken, create a unique name.
Account kind: Default to Storage (general purpose v1).
Performance: Default to Standard.
Replication: Default to Locally-redundant storage (LRS).
Select OK, then select Review + create. You're taken to the Review + create page where Azure validates your configuration.
When you see the Validation passed message, select Create.
Create the second VM
Repeat the procedure in the previous section to create another virtual machine.
Important
For the Virtual machine name, enter myVm2.
For Diagnostics storage account, make sure you select myvmstorageaccount instead of creating one.
Connect to a VM from the internet
After you've created myVm1, connect to it from the internet.
In the Azure portal, search for and select myVm1.
Select Connect, then RDP.
The Connect page opens.
Select Download RDP File. Azure creates a Remote Desktop Protocol (.rdp) file and downloads it to your computer.
Open the RDP file. If prompted, select Connect.
Enter the username and password you specified when creating the VM.
Note
You may need to select More choices > Use a different account to specify the credentials you entered when you created the VM.
Select OK.
You may receive a certificate warning when you sign in. If you do, select Yes or Continue.
Once the VM desktop appears, minimize it to go back to your local desktop.
Communicate between VMs
In the Remote Desktop of myVm1, open PowerShell.
Enter ping myVm2.
You'll receive a message similar to this output:
Pinging myVm2.0v0zze1s0uiedpvtxz5z0r0cxg.bx.internal.clouda
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 10.1.0.5:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
The ping fails because ping uses the Internet Control Message Protocol (ICMP). By default, ICMP isn't allowed through the Windows firewall.
To allow myVm2 to ping myVm1 in a later step, enter this command:
New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
This command allows ICMP inbound through the Windows firewall.
Close the remote desktop connection to myVm1.
Complete the steps in Connect to a VM from the internet again, but connect to myVm2.
From a command prompt, enter ping myVm1.
You'll get back something like this message:
Pinging myVm1.0v0zze1s0uiedpvtxz5z0r0cxg.bx.internal.cloudapp.net [10.1.0.4] with 32 bytes of data:
Reply from 10.1.0.4: bytes=32 time=1ms TTL=128
Reply from 10.1.0.4: bytes=32 time<1ms TTL=128
Reply from 10.1.0.4: bytes=32 time<1ms TTL=128
Reply from 10.1.0.4: bytes=32 time<1ms TTL=128
Ping statistics for 10.1.0.4:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 1ms, Average = 0ms
You receive replies from myVm1 because you allowed ICMP through the Windows firewall on the myVm1 VM in step 3.
Close the remote desktop connection to myVm2.
Clean up resources
In this quickstart, you created a default virtual network and two VMs. You connected to one VM from the internet and securely communicated between the two VMs.
When you're done using the virtual network and the VMs, delete the resource group and all of the resources it contains:
Search for and select myResourceGroup.
Select Delete resource group.
Enter myResourceGroup for TYPE THE RESOURCE GROUP NAME and select Delete.
Next steps
To learn more about virtual network settings, see Create, change, or delete a virtual network.
By default, Azure allows secure communication between VMs. Azure only allows inbound remote desktop connections to Windows VMs from the internet. To learn more about types of VM network communications, see Filter network traffic.
Note
Azure services cost money. Azure Cost Management helps you set budgets and configure alerts to keep spending under control. Analyze, manage, and optimize your Azure costs with Cost Management. To learn more, see the quickstart on analyzing your costs.
I'm trying to install the MicroWebSrv2 web server on an M5Atom Lite.
github microWebSrv2
Has anyone done this before without freezing?
Would be grateful for any hint.
I copied the files with ampy to the m5atom lite, here is the file structure on the M5atom lite:
/ConnectWiFi.py
/MicroWebSrv2/__init__.py
/MicroWebSrv2/httpRequest.py
/MicroWebSrv2/httpResponse.py
/MicroWebSrv2/libs/XAsyncSockets.py
/MicroWebSrv2/libs/XAsyncSocktes.py
/MicroWebSrv2/libs/urlUtils.py
/MicroWebSrv2/microWebSrv2.py
/MicroWebSrv2/mods/PyhtmlTemplate.py
/MicroWebSrv2/mods/WebSockets.py
/MicroWebSrv2/webRoute.py
/SSL-Cert/openhc2.crt
/SSL-Cert/openhc2.key
/boot.py
/img/microWebSrv2.png
/lib/urequests.py
/main.py
/www/favicon.ico
/www/index.html
/www/pdf.png
/www/style.css
/www/test.pyhtml
/www/wschat.html
/www/wstest.html
And run into this error when booting:
Connection successful
('192.168.1.46', '255.255.255.0', '192.168.1.1', '192.168.1.1')
running on M5atom lite
---------------------------
- Python pkg MicroWebSrv2 -
-      version 2.0.6      -
-     by JC`zic & HC2     -
---------------------------
I (6206) modsocket: Initializing
[@WebRoute] GET /test-redir
[@WebRoute] GET /test-post (TestPost1/2)
[@WebRoute] POST /test-post (TestPost2/2)
Traceback (most recent call last):
File "main.py", line 153, in <module>
File "MicroWebSrv2/microWebSrv2.py", line 136, in LoadModule
MicroWebSrv2Exception: Cannot load module "WebSockets".
MicroPython v1.13 on 2020-09-02; TinyPICO with ESP32-PICO-D4
Type "help()" for more information.
this is raised by this code in MicroWebSrv2/microWebSrv2.py:
# ------------------------------------------------------------------------
@staticmethod
def LoadModule(modName) :
    if not isinstance(modName, str) or len(modName) == 0 :
        raise ValueError('"modName" must be a not empty string.')
    if modName in MicroWebSrv2._modules :
        raise MicroWebSrv2Exception('Module "%s" is already loaded.' % modName)
    try :
        modPath = MicroWebSrv2.__module__.split('microWebSrv2')[0] \
                  + ('mods.%s' % modName)
        module = getattr(__import__(modPath).mods, modName)
        modClass = getattr(module, modName)
        if type(modClass) is not type :
            raise Exception
        modInstance = modClass()
        MicroWebSrv2._modules[modName] = modInstance
        return modInstance
    except :
        raise MicroWebSrv2Exception('Cannot load module "%s".' % modName)
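One debugging note: the bare except at the end hides the real cause, which on a small board is often an ImportError or an out-of-memory error while compiling WebSockets.py on the device. A rough stdlib sketch of the same dynamic-import pattern, written so the underlying exception surfaces, might look like this (the load_class helper is my own, not part of MicroWebSrv2; it is demonstrated with a stdlib class instead of the WebSockets module):

```python
import importlib

def load_class(mod_path, class_name):
    # Equivalent of getattr(__import__(modPath).mods, modName) followed by
    # getattr(module, modName), but without a bare except, so any real
    # ImportError/AttributeError propagates with its original message.
    module = importlib.import_module(mod_path)
    cls = getattr(module, class_name)
    if not isinstance(cls, type):
        raise TypeError("%r is not a class" % class_name)
    return cls()

# Demonstration with a stdlib module in place of MicroWebSrv2.mods.WebSockets:
obj = load_class("collections", "OrderedDict")
print(type(obj).__name__)  # OrderedDict
```

Running the failing import by hand in the REPL the same way (without the try/except) is a quick way to see why "WebSockets" refuses to load.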
Hi guys
In this tutorial, I will guide you on how to detect emotions associated with textual data and how can you apply it in real-world applications.
Understanding the emotions associated with text is commonly known as sentiment analysis.
You can apply it to analyze customer feedback, directly reading each item as positive or negative instead of manually reading it to detect the emotion.
There are a variety of Python libraries that can be used for natural language processing tasks, including detecting emotions in text:
Natural Language Toolkit (NLTK)
Gensim.
polyglot.
TextBlob.
CoreNLP.
spaCy.
Pattern.
Vocabulary.
Well, based on simplicity and ease of getting started, I have chosen to go with TextBlob throughout this tutorial.
TextBlob provides a simple API for diving into common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, and more.
The good thing I like about it is its simplicity in getting started with natural language processing tasks.
On Windows, just use the normal pip command:
pip install textblob
python -m textblob.download_corpora
On Linux, use the pip3 command for Python 3:
pip3 install textblob
python3 -m textblob.download_corpora
To perform textual analysis using TextBlob, we first have to create a TextBlob object, as shown below:
>>> from textblob import TextBlob
>>> text = 'I had an awesome day'
>>> blob_text = TextBlob(text)
Once you have created a textblob object you can now access tons of textblob methods to manipulate textual data.
For example, tagging the parts of speech of a text is as simple as shown below:
>>> from textblob import TextBlob
>>> text = 'I had an awesome day'
>>> blob_text = TextBlob(text)
>>> tags = blob_text.tags
>>> print(tags)
[('I', 'PRP'), ('had', 'VBD'), ('an', 'DT'), ('awesome', 'JJ'), ('day', 'NN')]
To perform sentiment analysis using TextBlob, we use the sentiment property, as shown below:
>>> sentiment = blob_text.sentiment
>>> print(sentiment)
Sentiment(polarity=1.0, subjectivity=1.0)
As we can see above, accessing sentiment returns a TextBlob Sentiment object with a polarity and a subjectivity.
In building an emotion detector we are mostly concerned with the polarity, which we can read off the Sentiment object as an attribute:
>>> polarity = sentiment.polarity
>>> print(polarity)
1.0
Note:
The polarity of textual data ranges from -1 to 1, where negative polarity indicates negative emotion (with -1 the most negative) and vice versa.
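Since only the sign of the polarity matters for this kind of detector, mapping scores to labels is a small helper (the thresholds and the polarity_label name are my own choices; TextBlob itself just returns the number):

```python
def polarity_label(polarity):
    # Negative scores -> negative emotion, positive -> positive, 0 -> neutral
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

print(polarity_label(1.0))   # positive
print(polarity_label(-0.7))  # negative
print(polarity_label(0.0))   # neutral
```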
Let’s assume we have our app which allows users to provide feedbacks If they like the user experience or not, and then we are going to use textblob to count negative feedbacks and negative feedbacks
from textblob import TextBlob

feedbacks = ['I love the app is amazing ',
             "The experience was bad as hell",
             "This app is really helpful",
             "Damn the app tastes like shit ",
             'Please don\'t download the app you will regret it ']

positive_feedbacks = []
negative_feedbacks = []

for feedback in feedbacks:
    feedback_polarity = TextBlob(feedback).sentiment.polarity
    if feedback_polarity > 0:
        positive_feedbacks.append(feedback)
        continue
    negative_feedbacks.append(feedback)

print('Positive_feebacks Count : {}'.format(len(positive_feedbacks)))
print(positive_feedbacks)
print('Negative_feedback Count : {}'.format(len(negative_feedbacks)))
print(negative_feedbacks)
Output :
Once you run the code above, the script automatically separates the positive and negative feedback given by the customers, as shown below:
$ python app.py
Positive_feebacks Count : 2
['I love the app is amazing ', 'This app is really helpful']
Negative_feedback Count : 3
['The experience was bad as hell', 'Damn the app tastes like shit ', "Please don't download the app you will regret it "]
Congratulations, you have performed emotion detection from text using Python. Now don't be shy, share it with your fellow friends on Twitter and in social media groups.
I hope you find this interesting. In case of any comment, suggestion, or trouble, drop it in the comment box and I will get back to you as fast as I can.
This was a brief introduction to building an emotion text analyzer in Python using its natural language processing library TextBlob.
Fixing pip install errors
Sometimes, when installing a Python library with pip install, red error messages appear.
1. Error: ReadTimeoutError: HTTPSConnectionPool(host='pypi.python.org', port=443): Read timed out.
Downloading xgboost-0.6a2.tar.gz (1.2MB)
48% |███████████████▋ | 583kB 47kB/s eta 0:00:13
Exception:
Traceback (most recent call last):
File "c:\python27\lib\site-packages\pip\basecommand.py", line 215, in main
status = self.run(options, args)
File "c:\python27\lib\site-packages\pip\commands\install.py", line 335, in run
wb.build(autobuilding=True)
File "c:\python27\lib\site-packages\pip\wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "c:\python27\lib\site-packages\pip\req\req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "c:\python27\lib\site-packages\pip\req\req_set.py", line 620, in _prepare_file
session=self.session, hashes=hashes)
File "c:\python27\lib\site-packages\pip\download.py", line 821, in unpack_url
hashes=hashes
File "c:\python27\lib\site-packages\pip\download.py", line 659, in unpack_http_url
hashes)
File "c:\python27\lib\site-packages\pip\download.py", line 882, in _download_http_url
_download_url(resp, link, content_file, hashes)
File "c:\python27\lib\site-packages\pip\download.py", line 603, in _download_url
hashes.check_against_chunks(downloaded_chunks)
File "c:\python27\lib\site-packages\pip\utils\hashes.py", line 46, in check_against_chunks
for chunk in chunks:
File "c:\python27\lib\site-packages\pip\download.py", line 571, in written_chunks
for chunk in chunks:
File "c:\python27\lib\site-packages\pip\utils\ui.py", line 139, in iter
for x in it:
File "c:\python27\lib\site-packages\pip\download.py", line 560, in resp_read
decode_content=False):
File "c:\python27\lib\site-packages\pip\_vendor\requests\packages\urllib3\response.py", line 357, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "c:\python27\lib\site-packages\pip\_vendor\requests\packages\urllib3\response.py", line 324, in read
flush_decoder = True
File "c:\python27\lib\contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "c:\python27\lib\site-packages\pip\_vendor\requests\packages\urllib3\response.py", line 246, in _error_catcher
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
ReadTimeoutError: HTTPSConnectionPool(host='pypi.python.org', port=443): Read timed out.
Cause: pip cannot reach its package index, so downloading the package fails.
Solution 1 (recommended): create a Python file (.py) with the following content:
import os
ini="""[global]
index-url = https://pypi.doubanio.com/simple/
[install]
trusted-host=pypi.doubanio.com
"""
pippath=os.environ["USERPROFILE"]+"\\pip\\"
if not os.path.exists(pippath):
os.mkdir(pippath)
with open(pippath+"pip.ini","w+") as f:
f.write(ini)
Run this .py file from cmd.
After that, downloads via pip install will be much faster.
Solution 2: increase the timeout
pip --default-timeout=100 install -U pip
and install packages with a command of the form:
pip --default-timeout=100 install -U scrapy   (replace scrapy with the package name)
Solution 3: download the matching .whl file from https://pypi.python.org/simple/pip/
Once downloaded, install it with pip:
pip install (path)/pip-8.1.2-py2.py3-none-any.whl
2. Error: PermissionError: [WinError 5] Access is denied: 'c:\program files\python35\Lib\site-packages\xlwt'
Exception:
Traceback (most recent call last):
  File "c:\program files\python35\lib\site-packages\pip\basecommand.py", line 211, in main
    status = self.run(options, args)
  File "c:\program files\python35\lib\site-packages\pip\commands\install.py", line 311, in run
    root=options.root_path,
  File "c:\program files\python35\lib\site-packages\pip\req\req_set.py", line 646, in install
    **kwargs
  File "c:\program files\python35\lib\site-packages\pip\req\req_install.py", line 803, in install
    self.move_wheel_files(self.source_dir, root=root)
  File "c:\program files\python35\lib\site-packages\pip\req\req_install.py", line 998, in move_wheel_files
    isolated=self.isolated,
  File "c:\program files\python35\lib\site-packages\pip\wheel.py", line 339, in move_wheel_files
    clobber(source, lib_dir, True)
  File "c:\program files\python35\lib\site-packages\pip\wheel.py", line 310, in clobber
    ensure_dir(destdir)
  File "c:\program files\python35\lib\site-packages\pip\utils\__init__.py", line 71, in ensure_dir
    os.makedirs(path)
  File "c:\program files\python35\lib\os.py", line 241, in makedirs
    mkdir(name, mode)
PermissionError: [WinError 5] Access is denied: 'c:\\program files\\python35\\Lib\\site-packages\\xlwt'
Solution:
Right-click the python27 folder -> Properties -> Security -> Edit -> Full control -> Allow -> Save
3. Error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xd3 in position 7: ordinal not in range(128)
Exception:
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\pip\basecommand.py", line 215, in main
    status = self.run(options, args)
  File "c:\python27\lib\site-packages\pip\commands\install.py", line 324, in run
    requirement_set.prepare_files(finder)
  File "c:\python27\lib\site-packages\pip\req\req_set.py", line 380, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "c:\python27\lib\site-packages\pip\req\req_set.py", line 620, in _prepare_file
    session=self.session, hashes=hashes)
  File "c:\python27\lib\site-packages\pip\download.py", line 821, in unpack_url
    hashes=hashes
  File "c:\python27\lib\site-packages\pip\download.py", line 659, in unpack_http_url
    hashes)
  File "c:\python27\lib\site-packages\pip\download.py", line 880, in _download_http_url
    file_path = os.path.join(temp_dir, filename)
  File "c:\python27\lib\ntpath.py", line 85, in join
    result_path = result_path + p_path
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd3 in position 7: ordinal not in range(128)
Cause: the installation path that pip loads contains Chinese characters, which the ascii codec cannot decode.
Solution: in the Python directory Python27\Lib\site-packages, create a file sitecustomize.py containing:
import sys
sys.setdefaultencoding('gbk')
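The root cause can be reproduced in a few lines: the bytes of a GBK-encoded Chinese character cannot be decoded as ASCII, but decode fine as GBK (a sketch in Python 3 syntax for clarity; the original error occurred under Python 2):

```python
raw = b'\xd3\xa2'  # the character '英' encoded in GBK; 0xd3 is outside ASCII

try:
    raw.decode('ascii')
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xd3 ...

print(raw.decode('gbk'))  # decodes cleanly with the right codec
```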
As an environmentally conscious, or at least thrifty, public-sector employee, you generally skip the private car for a longer commute and happily take part in public transport. In other words: Deutsche Bahn (and, in Cologne, also the KVB) is our friend! Rumor has it that the provided vehicles are not always on site when the timetable would lead you to expect them. So that you can spend the resulting waiting time at the breakfast table or in a comfortable office chair rather than on the platform, up-to-date information about the delays is essential.
To be fair, DB's service has improved considerably in this respect. Delay information is now available to the minute and in real time, both on the web and via the "DB Navigator" app. The aforementioned public-sector employee, however, is not only thrifty (yes, yes, and environmentally conscious) but also click-lazy and, on top of that, a tinkerer. So I got the idea of building an (for me) optimal display screen, both at home and in the office, from hardware that was already lying around.
It was meant to show not only the current delays of my train connections but also other interesting information, namely: current news, the weather forecast and (at home) additionally the image of an IP camera connected via Wi-Fi. The hardware consisted of a cheap PC monitor that was hardly used anyway since I bought a notebook, plus, as is fashionable, a Raspberry Pi. The system had to work without any further peripherals, in particular without mouse and keyboard. On the software side I therefore went with Google Chrome in kiosk mode. With the extension "Easy Auto Refresh" you can make Chrome reload the displayed page automatically once per minute. The camera image runs in streaming mode anyway.
The Pi's graphical desktop had to be configured so that it does not switch itself off; the display should be controlled solely via the monitor's power button. This is achieved via a setting in LightDM.
Since I wanted to avoid installing and configuring a web server, I use a plain local HTML page on the Pi. The two desired elements, "current news" and "weather forecast", are very easy to realize with suitable widgets. I used the offerings of wetterdienst.de and rp-online, but there are numerous other providers.
Things got really interesting with the delay display. As I had to discover, Deutsche Bahn unfortunately offers no suitable API for this purpose. I had no choice but to parse the corresponding web page. This realization was the birth of project "Mepevea" (MEin PErsönlicher VErspätungsAnzeiger, German for "my personal delay indicator").
As mentioned, I wanted to do without installing and running a web server; the display only has to work for me personally anyway. So I had to put the actual logic, parser included, into a Python script that is invoked by a cron job (yes, I work on Linux and have ignored Windows for years, but porting should not be a big problem). The parser is naturally based on "BeautifulSoup"; in addition, urllib is needed to fetch the page, plus a few other modules. The script therefore starts with:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import bs4, urllib2, time, fileinput, sys, urllib
"fileinput" is used later to replace the <div> block in the HTML with the current data, e.g.:
for line in fileinput.FileInput("/home/pi/anzeige/bahnlinks.html",inplace=1):
if line.startswith('<div id="bahn">'):
line = text
sys.stdout.write(line)
Of course it makes sense to vary the display depending on the weekday and time of day (morning commute, return trip, evening/weekend), e.g.:
timestamp = time.localtime(time.time())
if timestamp[6] > 4:
textlist.append("<b>Bahnanzeige erst am Montag wieder! Schönes Wochenende!</b>")
It should already be clear here: individual customization is essential and I can only sketch the examples. Don't worry: at the end I will provide my complete script as a "big example".
The central element of the script is the parser function. It takes the Bahn URL as a parameter (more on that below) and feeds it through BeautifulSoup:
def parser(url):
page = urllib2.urlopen(url).read()
soup = bs4.BeautifulSoup(page)
Take my word for it at this point that we get the interesting content if we search for the keywords, more precisely the <td> classes "overview timelink" and "overview tprt":
zeilen = soup.find_all('td', {"class" : "overview timelink"})
verspaetungen = soup.find_all('td', {"class" : "overview tprt"})
Here you can already see where the biggest problem of our nice hack lies: should Deutsche Bahn change the class names for whatever reason, nothing will work anymore. The same goes for the URLs and the HTML structure. This is exactly why encapsulating APIs usually exist, but as mentioned they are not available here.
By default the Bahn returns the next three trains from the given departure time. I later extended the final version so that this can be varied, but that would go too far here. Likewise I would now have to go into the details of BeautifulSoup to explain the following code block. I will spare myself that as well and refer to the module's good online documentation. We obtain our connections and the current delays like this:
parsedtext = ''
zaehler = 0
for zeile in zeilen:
for zelle in zeile.children:
parsedtext += zelle.contents[0]
parsedtext += '<span style="color: red;">'
for verspaetung in verspaetungen[zaehler].children:
if str(verspaetungen[zaehler]).count("okmsg") > 1 or str(verspaetungen[zaehler]).count("red") > 1:
parsedtext += verspaetung.contents[0]
break
parsedtext += '</span>'
zaehler += 1
I am 99% sure that this is not the most elegant way to obtain and prepare the information. But it works. Anyone who can make the whole thing shorter, nicer and more understandable without losing functionality is welcome to contact me.
Now for the required URLs. In a first version I used one URL per train based on the Bahn tool "query2.exe", which was also much easier to parse (note: don't be misled by the ".exe" extension, this is a web service, not a local program). Unfortunately I had to discover that the Bahn completely changes this URL with every (planned) minor timetable change, so in the long run that was no solution. Instead I now use the "preliminary stage" called "query.exe". It has clearly defined and, hopefully, permanently stable parameters. As parameters we need the code of the departure station, the code of the destination station and the departure time.
While the departure time is of course up to you and is simply given in the form hh:mm, you have to look up the station codes (the so-called IBNR) once. Fortunately this is very easy with an online search.
If, for example, the IBNR of the departure station is 8000208, that of the destination station is 8000133 and the desired departure time is 17:00, the URL is:
http://reiseauskunft.bahn.de/bin/query.exe/dox?S=8000208&Z=8000133&time=17:00&start=1
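In a script, this URL can be assembled from the IBNR codes and the departure time with plain string formatting. A minimal sketch (the helper name build_url is mine; the parameters S, Z, time and start are the ones described above):

```python
BASE = 'http://reiseauskunft.bahn.de/bin/query.exe/dox'

def build_url(start_ibnr, dest_ibnr, start_time):
    # Assemble the query.exe URL from the station codes (IBNR)
    # and the departure time in hh:mm format.
    return '{0}?S={1}&Z={2}&time={3}&start=1'.format(
        BASE, start_ibnr, dest_ibnr, start_time)

print(build_url(8000208, 8000133, '17:00'))
```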
With this, a matching display (a "Mepevea") can be built for any connection and any combination of times of day.
I am grateful for further ideas, suggestions for improvement, etc. at any time. And if someone could persuade Deutsche Bahn to provide a proper API after all, that would be a dream. 😉
As promised: the complete script and an example HTML page can be found at http://dl.distinguish.de/mepevea.zip
A tensor can be loosely understood as a multi-dimensional array, but in fact a tensor in TensorFlow does not store concrete values itself; it is only a reference to the result of an operation.
Another use of tensors: once the computation graph has been built, a tensor can be evaluated in a session to obtain the result of the computation.
A tensor has three attributes: name, shape and dtype.
x1 = tf.constant(1.0, shape=[])     # 0-D scalar, shape=()
x2 = tf.constant(1.0, shape=[1])    # 1-D vector, shape=(n,)
x3 = tf.constant(1.0, shape=[1,1])  # 2-D array, shape=(m,n)
x4 = tf.random_normal(shape=(5, ))  # 1-D vector, shape=(5,)
a1 = tf.nn.tanh(z1, name='output')  # activation function; also a tensor
print(x4)
============================================== output ==============================================
Tensor("random_num:0", shape=(5,), dtype=float32)
An op (also called a node or operation) takes tensors as input and produces tensors as output.
train_step = optimizer.minimize(loss, name='train_step')
print(train_step)
============================================== output ==============================================
name: "train_step"
op: "NoOp"
input: "^train_step/update_w1/ApplyGradientDescent"
input: "^train_step/update_Variable/ApplyGradientDescent"
input: "^train_step/update_Variable_1/ApplyGradientDescent"
input: "^train_step/update_Variable_2/ApplyGradientDescent"
A TensorFlow computation generally has two phases: the first phase defines all the computations in the graph, the second phase executes them (introduced in part 4 of this article).
Every computation in TensorFlow is a node in a computation graph, and the edges between nodes describe the dependencies between computations. Tensors and operations are not shared between different graphs; tf.get_default_graph() returns the current graph. The relationship between graph, tensor and op can be illustrated as follows:
x and y are tensors referencing constants. add is a node whose inputs are x and y; its output z is also a tensor, and this whole structure is a graph.
TensorFlow uses a Session to execute the defined computation graph.
Advanced usage (specifying graphs, GPU/CPU placement, etc.) will be covered later.
After definition, a constant's value and shape are both immutable, while a variable's value can change but its shape cannot. Compared with plain tensors, the key property of variables is that they can be saved, so parameters that need to be trained are usually defined as variables. Variables must be initialized before use:
sess.run(tf.global_variables_initializer())
Printed out, a variable, a constant and tensors look like this:
a = tf.Variable(tf.random_normal([2]))
b = tf.constant([5.0,2.5])
c = tf.random_normal([2])
d = a+b+c
print(a,'\n',b,'\n',c,'\n',d)
============================================== output ==============================================
<tf.Variable 'Variable_10:0' shape=(2,) dtype=float32_ref>
Tensor("Const_5:0", shape=(2,), dtype=float32)
Tensor("random_normal_15:0", shape=(2,), dtype=float32)
Tensor("add_6:0", shape=(2,), dtype=float32)
From the output above we can see that a constant is a tensor, and that tensors and variables can be used in computations together as long as their dtypes match; note that tf.float64 and tf.float32 are different data types.
Installing RabbitMQ on CentOS and testing it with Python
First, install the Erlang environment; simply run
yum list | grep erlang
erlang.x86_64 R16B-03.16.el7 epel
You will find the package above; just install it.
yum install erlang.x86_64
With the Erlang environment in place, download the RabbitMQ package.
For convenience, download the RPM package directly.
Once downloaded, install and start it:
rpm -ivh rabbitmq-server-3.6.1-1.noarch.rpm
service rabbitmq-server start
Once it is running, check which users currently exist:
[root@iZ94mr3pnsgZ download]# rabbitmqctl list_users
Listing users ...
guest   [administrator]
There is only guest, so let's add another one.
[root@iZ94mr3pnsgZ download]# rabbitmqctl add_user yueer01 password
Good, now there is a second user, yueer01, with a password of your own choosing. But this yueer01 has no permissions yet:
[root@iZ94mr3pnsgZ download]# rabbitmqctl list_users
Listing users ...
yueer01   []
guest   [administrator]
Let's give it a tag; RabbitMQ currently provides four user tags.
none
Cannot access the management plugin.
management
Everything the user can do via AMQP, plus:
List the virtual hosts they can log into via AMQP
View the queues, exchanges and bindings in their virtual hosts
View and close their own channels and connections
View "global" statistics about their virtual hosts, including the activity of other users in those virtual hosts
policymaker
Everything management can do, plus:
View, create and delete the policies and parameters of their virtual hosts
monitoring
Everything management can do, plus:
List all virtual hosts, including the ones they cannot log into
View other users' connections and channels
View node-level data such as clustering and memory usage
View true global statistics about all virtual hosts
administrator
Everything policymaker and monitoring can do, plus:
Create and delete virtual hosts
View, create and delete users
View, create and delete permissions
Close other users' connections
To keep things simple, let's set it to administrator.
[root@iZ94mr3pnsgZ download]# rabbitmqctl set_user_tags yueer01 administrator
Setting tags for user "yueer01" to [administrator] ...
[root@iZ94mr3pnsgZ download]# rabbitmqctl list_users
Listing users ...
yueer01 [administrator]
guest [administrator]
Both are administrators now, but the command line is still not very clear. RabbitMQ provides a very convenient web management UI that can be enabled with a single command:
[root@iZ94mr3pnsgZ download]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
mochiweb
webmachine
rabbitmq_web_dispatch
amqp_client
rabbitmq_management_agent
rabbitmq_management
Open a browser and enter the IP address and port to see what it is:
http://your_ip_address:15672/
Enter the username and password created above (yueer01 / password)
to enter the web page.
That's it; you can explore the details on your own.
With that, the whole installation is complete, so let's run some Python code locally and see what happens.
Take an example from the official RabbitMQ documentation and adapt it slightly, like this:
# coding:utf-8
import pika
username = 'yueer01'
password = 'password'
host = '10.10.10.10'
credentials = pika.PlainCredentials(username, password)
connection = pika.BlockingConnection(pika.ConnectionParameters(host=host, credentials=credentials, port=5672))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
routing_key='hello',
body='Hello World!')
print "[x] Sent 'Hello World!'"
connection.close()
Oops, it throws an error; let's look at the cause:
pika.exceptions.ProbableAccessDeniedError
Access was denied. Pretty much any network service has permission settings; for example mysql and redis must be configured before allowing remote access, so we also have to configure RabbitMQ for remote access.
It turns out we still have to set permissions. Open the web UI and switch to the Admin tab, where you can see the following:
"No access": you can either click the username and set a default, or set the permissions with a command.
To set permissions on the command line:
[root@iZ94mr3pnsgZ rabbitmq]# rabbitmqctl set_permissions -p '/' yueer01 ".*" ".*" ".*"
Setting permissions for user "yueer01" in vhost "/" ...
Refresh the page and you will notice the change.
Run the script again; this time it works:
[x] Sent 'Hello World!'
Process finished with exit code 0
Now you can watch the various metrics in the web UI's charts; the whole process is very clear.
Next let's write a receiving script, again adapted from an official example:
# coding:utf-8
import pika
username = 'yueer01'
password = 'password'
host = 'your_ip_address'
credentials = pika.PlainCredentials(username, password)
connection = pika.BlockingConnection(pika.ConnectionParameters(
host=host, credentials=credentials, port=5672
))
channel = connection.channel()
channel.queue_declare(queue='hello')
def callback(ch, method, properties, body):
print "[x] Received %r" % body
channel.basic_consume(callback, queue='hello', no_ack=True)
print '[*] Waiting for messages. To exit press CTRL+C'
channel.start_consuming()
Run it and you will see the results:
[*] Waiting for messages. To exit press CTRL+C
[x] Received 'Hello World!'
[x] Received 'Hello World!'
I ran the publish script twice just now, so two messages were received.
That covers the basic usage of RabbitMQ. Later we will look at how to send and receive data inside twisted and run RabbitMQ asynchronously, which is quite interesting.
Joblib provides a simple helper class to write parallel for loops using multiprocessing. The core idea is to write the code to be executed as a generator expression, and convert it to parallel computing:
>>> from math import sqrt
>>> [sqrt(i ** 2) for i in range(10)]
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
can be spread over 2 CPUs using the following:
>>> from math import sqrt
>>> from joblib import Parallel, delayed
>>> Parallel(n_jobs=2)(delayed(sqrt)(i ** 2) for i in range(10))
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
Under the hood, the Parallel object creates a multiprocessing pool that forks the Python interpreter into multiple processes to execute each of the items of the list. The delayed function is a simple trick to be able to create a tuple (function, args, kwargs) with a function-call syntax.
Some algorithms require making several consecutive calls to a parallel function interleaved with processing of the intermediate results. Calling Parallel several times in a loop is sub-optimal because it will create and destroy a pool of workers several times, which can cause a significant overhead. For this case it is more efficient to use the context manager API of the Parallel class to re-use the same pool of workers for several calls to the Parallel object:
>>> with Parallel(n_jobs=2) as parallel:
...     accumulator = 0.
...     n_iter = 0
...     while accumulator < 1000:
...         results = parallel(delayed(sqrt)(accumulator + i ** 2)
...                            for i in range(5))
...         accumulator += sum(results)  # synchronization barrier
...         n_iter += 1
...
>>> (accumulator, n_iter)
(1136.596..., 14)
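The delayed trick described above can be sketched in a few lines: a wrapper that records the call as a (function, args, kwargs) tuple instead of executing it (a simplified illustration of the idea, not joblib's actual implementation):

```python
from math import sqrt

def delayed(function):
    # Capture (function, args, kwargs) with normal call syntax,
    # without executing anything yet.
    def delayed_function(*args, **kwargs):
        return function, args, kwargs
    return delayed_function

tasks = [delayed(sqrt)(i ** 2) for i in range(3)]
# Each task is a tuple that a worker process can execute later:
results = [func(*args, **kwargs) for func, args, kwargs in tasks]
print(results)  # [0.0, 1.0, 2.0]
```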
Via Joblib.
http://opus-two.myshopify.com/products/ ... uation-kit
To use it with the embedded Project Oberon SPI SD card driver you will need to make the following pin assignments for the Pmod header JA in the Arty.xdc file:
Code:
set_property -dict { PACKAGE_PIN G13 IOSTANDARD LVCMOS33 } [get_ports { SS[0] }]; #IO_0_15 Sch=ja[1]
set_property -dict { PACKAGE_PIN B11 IOSTANDARD LVCMOS33 } [get_ports { MOSI[0] }]; #IO_L4P_T0_15 Sch=ja[2]
set_property -dict { PACKAGE_PIN A11 IOSTANDARD LVCMOS33 } [get_ports { MISO[0] }]; #IO_L4N_T0_15 Sch=ja[3]
set_property -dict { PACKAGE_PIN D12 IOSTANDARD LVCMOS33 } [get_ports { SCLK[0] }]; #IO_L6P_T0_15 Sch=ja[4]
linux_if
Name
linux_if - munin plugin monitoring Linux network interfaces
Description
This is not a wildcard plugin. Monitored interfaces are controlled by ‘include’, ‘exclude’ in config. By default, only statically configured interfaces (and their sub-interfaces) are monitored.
Features:
bonding - group bonding slave interfaces with master
vlans - group vlan sub-interfaces with main (dot1q trunk) interface
Configuration
[linux_if]
# run plugin as root (required if you have VLAN sub-interfaces)
user = root
# comma separated list of interface patterns to exclude from monitoring
# default: lo
# example:
env.exclude = lo,vnet*
# comma separated list of interface patterns to include in monitoring
# default: (empty)
# example:
env.include = br_*
# should statically configured interfaces be included (they have ifcfg-* file)
# default: true
env.include_configured_if = true
Include/exclude logic in detail. Interface name is matched according to the following rules:
if matched by any exclude pattern, then exclude. Otherwise next step.
if matched by any include pattern, then include, Otherwise next step.
if ‘include_configured_if’ is true and ‘ifcfg-*’ file exists then include
default is not to include interface in monitoring
automatically include sub-interface, if the parent interface is monitored
Tested on: RHEL 6.x and clones (with Python 2.6)
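The include/exclude rules above boil down to a few fnmatch calls. A standalone sketch of steps 1-4 (the example patterns and the has_ifcfg flag are illustrative; step 5, sub-interface inheritance, is handled elsewhere in the plugin):

```python
import fnmatch

exclude_patterns = ['lo', 'vnet*']  # example env.exclude
include_patterns = ['br_*']         # example env.include

def is_monitored(ifname, has_ifcfg=False, include_configured_if=True):
    # 1. any exclude pattern wins
    if any(fnmatch.fnmatch(ifname, p) for p in exclude_patterns):
        return False
    # 2. then any include pattern
    if any(fnmatch.fnmatch(ifname, p) for p in include_patterns):
        return True
    # 3. statically configured interfaces (an ifcfg-* file exists)
    if include_configured_if and has_ifcfg:
        return True
    # 4. default: not monitored
    return False

print(is_monitored('vnet0'))                 # False, rule 1
print(is_monitored('br_ext'))                # True, rule 2
print(is_monitored('eth0', has_ifcfg=True))  # True, rule 3
print(is_monitored('eth1'))                  # False, rule 4
```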
Todo
implement ‘data loaning’ between graphs, removes duplicit measures
add support for bridging
configurable graph max based on interface speed
Magic Markers
#%# family=manual
#!/usr/bin/env python
"""
=head1 NAME
linux_if - munin plugin monitoring Linux network interfaces
=head1 DESCRIPTION
This is not a wildcard plugin. Monitored interfaces are controlled
by 'include', 'exclude' in config. By default, only statically
configured interfaces (and their sub-interfaces) are monitored.
Features:
=over
=item bonding - group bonding slave interfaces with master
=item vlans - group vlan sub-interfaces with main (dot1q trunk) interface
=back
=head1 CONFIGURATION
[linux_if]
# run plugin as root (required if you have VLAN sub-interfaces)
user = root
# comma separated list of interface patterns to exclude from monitoring
# default: lo
# example:
env.exclude = lo,vnet*
# comma separated list of interface patterns to include in monitoring
# default: (empty)
# example:
env.include = br_*
# should statically configured interfaces be included (they have ifcfg-* file)
# default: true
env.include_configured_if = true
Include/exclude logic in detail. Interface name is matched according to the following rules:
=over 4
=item 1. if matched by any exclude pattern, then exclude. Otherwise next step.
=item 2. if matched by any include pattern, then include, Otherwise next step.
=item 3. if 'include_configured_if' is true and 'ifcfg-*' file exists then include
=item 4. default is not to include interface in monitoring
=item 5. automatically include sub-interface, if the parent interface is monitored
=back
Tested on: RHEL 6.x and clones (with Python 2.6)
=head1 TODO
=over 4
=item implement 'data loaning' between graphs, removes duplicit measures
=item add support for bridging
=item configurable graph max based on interface speed
=back
=head1 MAGIC MARKERS
#%# family=manual
=cut
"""
__author__ = 'Brano Zarnovican'
__email__ = 'zarnovican@gmail.com'
__license__ = 'BSD'
__version__ = '0.9'
import fnmatch, os, sys
#from pprint import pprint
#
# handle 'autoconf' option
#
if len(sys.argv) > 1 and sys.argv[1] == 'autoconf':
if os.path.exists('/proc/net/dev'):
print('yes')
sys.exit(0)
else:
print('no')
sys.exit(1)
#
# plugin configuration
#
exclude_patterns = os.environ.get('exclude', 'lo').split(',')
include_patterns = os.environ.get('include', '').split(',')
include_configured_if = os.environ.get('include_configured_if', 'true').lower()
def interface_is_enabled(ifname):
"""logic to include or exclude this interface in plugin based on configuration"""
if any(fnmatch.fnmatch(ifname, pattern) for pattern in exclude_patterns):
return False
if any(fnmatch.fnmatch(ifname, pattern) for pattern in include_patterns):
return True
if include_configured_if == 'true' and \
os.path.exists('/etc/sysconfig/network-scripts/ifcfg-'+ifname):
return True
return False
#
# read counts for all interfaces (for both 'update' or 'config')
#
interface = {} # interface[name][measure] = value
try:
fieldnames = ('rxbytes', 'rxpackets', 'rxerrs', 'rxdrop', 'rxfifo', 'rxframe', 'rxcompressed', 'rxmulticast') +\
('txbytes', 'txpackets', 'txerrs', 'txdrop', 'txfifo', 'txcolls', 'txcarrier', 'txcompressed')
with open('/proc/net/dev') as f:
f.readline() # skip 2-line header
f.readline()
for line in f:
l = line.replace('|', ' ').replace(':', ' ').split()
ifname = l[0].strip(':')
assert len(l) == 17, 'Unexpected number of fields (%d)' % len(l)
interface[ifname] = dict(zip(fieldnames, l[1:]))
interface[ifname]['name'] = ifname
interface[ifname]['sname'] = ifname.replace('.', '_') # sanitized interface name
except IOError as e:
print(e)
sys.exit(-1)
#
# associate slave interfaces to their bond masters
#
bond = {} # bond[bondname][slavename][measure] = value
try:
with open('/sys/class/net/bonding_masters') as f:
bond_list = f.read().split()
for bondname in bond_list:
if bondname not in interface: continue
if not interface_is_enabled(bondname): continue
bond[bondname] = { 'subifs': [], }
bond[bondname]['parent'] = interface[bondname]
with open('/sys/class/net/'+bondname+'/bonding/slaves') as f:
slave_list = f.read().split()
for slave in slave_list:
if slave not in interface: continue
bond[bondname]['subifs'].append(slave)
bond[bondname][slave] = interface[slave]
del interface[slave]
except IOError:
pass # bonding not configured
#
# associate VLAN sub-interfaces to their trunks
#
trunk = {} # trunk[trunkname][subifname][measure] = value
try:
with open('/proc/net/vlan/config') as f:
f.readline()
f.readline()
for line in f:
(subif, vlanid, trunkif) = line.replace('|', ' ').split()
if trunkif not in interface: continue
if subif not in interface: continue
if not interface_is_enabled(trunkif): continue
if trunkif not in trunk:
trunk[trunkif] = { 'subifs': [], }
trunk[trunkif]['parent'] = interface[trunkif]
trunk[trunkif]['subifs'].append(subif)
trunk[trunkif][subif] = interface[subif]
del interface[subif]
except IOError:
pass # vlans not configured (or not running as root)
#
# all remaining interfaces are considered 'plain'
#
plain = {} # plain[ifname][measure] = value
for (ifname, counts) in interface.items():
if ifname in bond or ifname in trunk: continue
if not interface_is_enabled(ifname): continue
plain[ifname] = counts
#
# now, do the actual stdout output..
#
in_config = (len(sys.argv) > 1 and sys.argv[1] == 'config')
def graph_interface_traffic(data):
if in_config:
print("""graph_title {name} traffic
graph_order down up
graph_args --base 1000 --lower-limit 0
graph_vlabel bits in (-) / out (+) per ${{graph_period}}
graph_category network
down.label received
down.type DERIVE
down.graph no
down.cdef down,8,*
down.min 0
up.label bps
up.type DERIVE
up.negative down
up.cdef up,8,*
up.min 0""".format(**data))
else:
print("""down.value {rxbytes}
up.value {txbytes}""".format(**data))
print('')
def graph_interface_errors(data):
if in_config:
print("""graph_title {name} errors
graph_args --base 1000 --lower-limit 0
graph_vlabel counts RX (-) / TX (+) per ${{graph_period}}
graph_category network
rxerrs.label errors
rxerrs.type COUNTER
rxerrs.graph no
txerrs.label errors
txerrs.type COUNTER
txerrs.negative rxerrs
rxdrop.label drops
rxdrop.type COUNTER
rxdrop.graph no
txdrop.label drops
txdrop.type COUNTER
txdrop.negative rxdrop
txcolls.label collisions
txcolls.type COUNTER""".format(**data))
    else:
        print("""rxerrs.value {rxerrs}
txerrs.value {txerrs}
rxdrop.value {rxdrop}
txdrop.value {txdrop}
txcolls.value {txcolls}""".format(**data))
    print('')

def graph_traffic_with_subifs(ddata, title):
    if in_config:
        print('graph_title ' + title)
        print("""graph_args --base 1000 --lower-limit 0
graph_vlabel bits in (-) / out (+) per ${graph_period}
graph_category network""")
    for ifname in ddata['subifs'] + ['parent',]:
        data = ddata[ifname]  # look up in the argument, not an outer-scope variable
        if ifname == 'parent':
            label = 'total'
            drawtype = 'LINE1'
        else:
            label = data['name']
            drawtype = 'AREASTACK'
        if in_config:
            print("""{sname}_down.label {label}
{sname}_down.type DERIVE
{sname}_down.graph no
{sname}_down.cdef {sname}_down,8,*
{sname}_down.min 0
{sname}_up.label {label}
{sname}_up.type DERIVE
{sname}_up.negative {sname}_down
{sname}_up.cdef {sname}_up,8,*
{sname}_up.draw {drawtype}
{sname}_up.min 0""".format(label=label, drawtype=drawtype, **data))
            if ifname == 'parent':
                print('{sname}_up.colour 000000'.format(**data))
        else:
            print("""{sname}_down.value {rxbytes}
{sname}_up.value {txbytes}""".format(**data))
    print('')

for d in plain.values():
    print('multigraph interface_{sname}_traffic'.format(**d))
    graph_interface_traffic(d)
    print('multigraph interface_{sname}_errors'.format(**d))
    graph_interface_errors(d)
for d in bond.values():
    parent = d['parent']
    print('multigraph bond_{sname}_traffic'.format(**parent))
    graph_traffic_with_subifs(d, title='{0} traffic (stacked)'.format(parent['name']))
    print('multigraph bond_{sname}_errors'.format(**parent))
    graph_interface_errors(parent)
    for ifname in d['subifs']:
        if_data = d[ifname]
        print('multigraph bond_{0}_traffic.{1}'.format(parent['sname'], if_data['sname']))
        graph_interface_traffic(if_data)
        print('multigraph bond_{0}_errors.{1}'.format(parent['sname'], if_data['sname']))
        graph_interface_errors(if_data)
for d in trunk.values():
    parent = d['parent']
    print('multigraph trunk_{sname}_traffic'.format(**parent))
    graph_traffic_with_subifs(d, title='{0} trunk (stacked)'.format(parent['name']))
    for ifname in d['subifs']:
        if_data = d[ifname]
        print('multigraph trunk_{0}_traffic.{1}'.format(parent['sname'], if_data['sname']))
        graph_interface_traffic(if_data)
Python client library for compute.rhino3d web service
Project description
compute_rhino3d
Python package providing convenience functions to call compute.rhino3d.com geometry web services
Project Homepage: https://github.com/mcneel/compute.rhino3d
Supported platforms
This is a pure Python package and should work on all versions of Python
Test
Start Python and run:
>>> from rhino3dm import *
>>> import compute_rhino3d.Util
>>> import compute_rhino3d.Mesh
>>>
>>> compute_rhino3d.Util.authToken = AUTH_TOKEN  # token obtained from rhino3d.com/compute/login
>>> center = Point3d(250, 250, 0)
>>> sphere = Sphere(center, 100)
>>> brep = sphere.ToBrep()
>>> meshes = compute_rhino3d.Mesh.CreateFromBrep(brep)
>>> print("Computed mesh with {} faces".format(len(meshes[0].Faces)))
Latest release: compute_rhino3d-0.12.2.tar.gz (61.7 kB), source distribution.
Magento font icons usage and examples
Icons are a simple and effective way to draw users into the content of your website. They can help you structure content and separate different sections of the page. The primary goal of using icons should be to help the user find information on the page.
Icons
With icons you can quickly sum up what your text is about. Use an icon that encapsulates the point you are trying to get across in your paragraph. This will make the text more accessible to your readers.
Create an icon
example of a simple icon
You can place icons just about anywhere using simple markup. We are going to use an inline HTML element such as <span> and add appropriate classes to it. These are required classes: ic and the icon's name prefixed with ic-, for example ic-star. Here's an example of the code which will add a star icon:
<span class="ic ic-star"></span> example of a simple icon
If you change the font-size of the icon's container, the icon gets bigger. The same goes for color, drop shadow, and anything else that is inherited through CSS.
Icon size
ic-lg
ic-2x
ic-3x
ic-4x
To increase icon size relative to the font-size of the icon's container, use the following classes: ic-lg (increases the size of the icon by 33%), ic-2x, ic-3x, ic-4x, ic-5x, ic-6x, ic-7x or ic-8x.
<span class="ic ic-star"></span> <span class="ic ic-star ic-lg"></span> ic-lg <span class="ic ic-star ic-2x"></span> ic-2x <span class="ic ic-star ic-3x"></span> ic-3x <span class="ic ic-star ic-4x"></span> ic-4x
If your icons are getting chopped off on top and bottom,
make sure you have sufficient line-height.
Inline styles
Now you can start having more fun with icons. By default all icons have the same color as text, but if you want to change the color of selected icon, you can do it with inline CSS styles. Add the style attribute to the icon element and specify the color.
You can add inline styles to icons the same way as to any other element in an HTML document. The style attribute can contain any CSS property, such as color, font-size, or text-shadow.
<span class="ic ic-heart-o ic-3x"></span> <span class="ic ic-heart-o ic-3x" style="color: #e91e8f;"></span> <span class="ic ic-heart-o ic-3x" style="color: #95dc24;"></span>
Animated icon
Use the ic-spin class to get any icon to rotate.
<span class="ic ic-star ic-2x ic-spin" style="color: #be64e4;"></span> <span class="ic ic-reload ic-2x ic-spin" style="color: #5bd2ec;"></span>
Examples of icons
Iconboxes
Simple iconbox
example of an iconbox
To display an icon inside a box with background color (we call it an iconbox), add the ib class to the icon element. With the optional class ib-hover, the color of the iconbox will change on mouse hover over the iconbox.
Background color will be automatically added to the icon element. Make sure to leave the <span> tag empty; otherwise its contents will be displayed together with the icon, and any extra whitespace can shift the icon out of place.
<span class="ic ic-star ib ib-hover"></span> example of an iconbox
The default background color and color of the icon can be configured in the admin panel:
Theme Design > Colors > Iconbox
Iconbox size
To increase iconbox size, use the following classes: ib-size-l, ib-size-xl, ib-size-xxl, ib-size-xxxl.
The icon size is independent of the iconbox size and can be increased with classes which were described earlier. For example, add class ic-lg to make the icon a little bit bigger.
<span class="ic ic-heart-o ib ib-hover"></span> <span class="ic ic-heart-o ic-lg ib ib-hover ib-size-l"></span> <span class="ic ic-heart-o ic-lg ib ib-hover ib-size-xl"></span> <span class="ic ic-heart-o ic-2x ib ib-hover ib-size-xxl"></span> <span class="ic ic-heart-o ic-3x ib ib-hover ib-size-xxxl"></span>
Iconbox shape
To change the shape of the iconbox, use one of the following classes: ib-circle, ib-rounded, ib-square. By default the iconbox is always circular.
<span class="ic ic-star ic-lg ib ib-hover ib-size-l"></span> <span class="ic ic-star ic-lg ib ib-hover ib-size-l ib-rounded"></span> <span class="ic ic-star ic-lg ib ib-hover ib-size-l ib-square"></span>
Iconbox effects
To add eye-catching hover effects to the iconbox, use one of the following combinations of classes. Note that in each case the combination consists of two classes:
ib-ef-1 ib-ef-1a
ib-ef-1 ib-ef-1b
ib-ef-2 ib-ef-2a
ib-ef-2 ib-ef-2b
ib-ef-3 ib-ef-3a
ib-ef-3 ib-ef-3b
<span class="ic ic-plane ic-lg ib ib-size-l ib-ef-1 ib-ef-1a"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-1 ib-ef-1b"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-2 ib-ef-2a"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-2 ib-ef-2b"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-3 ib-ef-3a"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-3 ib-ef-3b"></span>
Examples of iconboxes
Blocks of text with icon
Icons can help you structure content and separate different sections of the page. The primary goal of using icons should be to help the user find information on the page and with icons you can quickly sum up what your text is about. For example, when you build lists, instead of using standard bullets, you can use icons to draw attention to paragraphs and other blocks of content.
Simple block with icon
Heading Example
This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks. Icons are an effective way to...
To create a simple block of text with an icon, wrap your text inside a <div> element with the feature class. Here's a minimal example:
<div class="feature"> <span class="left ic ic-star ic-2x" style="color: #5bd2ec;"></span> <h4>Heading Example</h4> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks. Icons are an effective way to...</p> </div>
If you add left or right class to the icon, the icon will be taken from the normal flow and placed along the left or right side of its container, and text will wrap around it.
Indented block
To display a block with indentation on the left side, add the indent class to the block element:
To increase the size of the indentation, use the following classes together with the indent class: indent-size-l, indent-size-xl, indent-size-xxl, indent-size-xxxl.
<div class="feature feature-icon-hover indent"> <span class="left ic ic-star ic-2x" style="color: #de2666;"></span> <h4>Heading Example</h4> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.</p> </div>
Block with iconbox and hover effect
To change the background color of the iconbox on mouse hover over the entire block, add the feature-icon-hover class to the block element.
If you increase the iconbox size (by adding a class such as ib-size-xl), you will also need to add corresponding class (in this case: indent-size-xl) to the block element. It will adjust the size of the indentation.
<div class="feature feature-icon-hover indent indent-size-xl"> <span class="left ic ic-star ic-lg ib ib-size-xl"></span> <h4>Heading Example</h4> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.</p> </div>
The default background color and color of the icon can be configured in the admin panel:
Theme Design > Colors > Iconbox
More complex example
Above heading
Heading Example
Text below heading
This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.
Example of another text paragraph inside a block. Icons are an effective way to draw users into the content of your store.
Read more...
Here's another, more complex example with additional headings and nested blocks. To change the background color of the iconbox you can use inline styles. Add the style attribute to the iconbox element and specify the background color.
<div class="feature indent indent-size-xl"> <span class="left ic ic-home ic-lg ib ib-size-xl" style="background-color: #ffb13e;"></span> <h6 class="above-heading">Above heading</h6> <h4>Heading Example</h4> <h6 class="below-heading">Text below heading</h6> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.</p> <div class="feature feature-icon-hover indent"> <span class="ic ic-char ib">1</span> <p>Lorem ipsum dolor sit, consectetur adipiscing elit.</p> </div> <div class="feature feature-icon-hover indent"> <span class="ic ic-char ib">2</span> <p>Lid est laborum et dolorum fuga et harum quidem.</p> </div> <div class="feature feature-icon-hover indent"> <span class="ic ic-char ib">3</span> <p>Seq et perspser iciatis unde omnis iste nautis.</p> </div> <p>Example of another text paragraph inside a block. Icons are an effective way to draw users into the content of your store.</p> <a href="#">Read more...</a> </div>
Centered block
Heading Example
This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.
To align elements of the block to the center, use the centered class.
<div class="feature centered"> <span class="ic ic-lightbulb ic-2x ib ib-size-xl" style="background-color: #bf78dd;"></span> <h4>Heading Example</h4> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.</p> </div>
Font Awesome icons
Font Awesome is a font and icon toolkit based on CSS. It offers a collection of more than 600 vector icons which can be easily customized (the same as other font icons available in the theme).
Basic Font Awesome icons
Use the fa class and the icon's name with an inline HTML element span. Here's an example of the code which will create a flag icon:
<span class="fa fa-flag fa-3x" style="color: #1b926c;"></span>
Use Font Awesome icons with other icon classes
You can use Font Awesome icons together with other icon classes described in this document. Here's an example of an iconbox element (the ib class) with Font Awesome icon inside a block
<div class="feature feature-icon-hover indent indent-size-l"> <span class="ic ic-2x ib ib-size-l left fa fa-flag" style="background-color: #71d1b3;"></span> <h4>Heading Example</h4> <p>This is a short paragraph of sample text inside a block.</p> </div>
Edit: Here's a perhaps slightly more accessible version of my long-winded original post: it's just vectors, right? I can create a little gender-nobility continuum and put some words on it like so:
           +--------------+
           |    gender    |
+----------+------+-------+
|          | man  | woman |
| nobility +------+-------+
|          | king | queen |
+----------+------+-------+
from gensim.models import KeyedVectors

# write a tiny word2vec-format file: 4 words, 2 dimensions each
with open('my_vecs.txt', 'w') as f:
    f.write('4 2\nman -1.0 -1.0\nwoman 1.0 -1.0\nking -1.0 1.0\nqueen 1.0 1.0')

my_vecs = KeyedVectors.load_word2vec_format('my_vecs.txt')
results = my_vecs.most_similar(positive=['king', 'woman'], negative=['man'])
print(results)
# [('queen', 0.9999999403953552)]
Big surprise, right? So we can skip over "how does this work," because that's easy, and get right to the deeper question with regard to mainstream practices, "how do these words get coordinates such that the equation holds?" For this, look to the training methods, which vary, but are largely spatially relational in the sequence, as in relating words in sequential proximity and otherwise. Unfortunately, this doesn't build a space of meaning like the equation hopes for, but rather builds a space where words are related (varying slightly by methods) by the frequency that a word appears in proximity to another word. That's essentially all there is to it. You can look at my code examples below to see it in action.
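To make the "it's just vectors" point fully concrete, here is the same arithmetic done by hand with numpy on the toy 2-D coordinates above, with no word-embedding library involved at all:

```python
import numpy as np

# The toy 2-D coordinates from the channel above:
# axis 0 = gender   (man/king  -1.0, woman/queen +1.0)
# axis 1 = nobility (man/woman -1.0, king/queen  +1.0)
words = {
    'man':   np.array([-1.0, -1.0]),
    'woman': np.array([ 1.0, -1.0]),
    'king':  np.array([-1.0,  1.0]),
    'queen': np.array([ 1.0,  1.0]),
}

# king + woman - man: flip the gender axis while keeping the nobility axis
result = words['king'] + words['woman'] - words['man']
print(result)  # [1. 1.] -- exactly the coordinates of 'queen'

# cosine similarity confirms the match (floating point puts it a hair under 1)
cos = result @ words['queen'] / (np.linalg.norm(result) * np.linalg.norm(words['queen']))
print(round(cos, 6))  # 1.0
```

In two dimensions the analogy is exact by construction; the interesting question, taken up below, is why it approximately holds in a learned 200-dimensional space.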
--- original post begins ---
As you surely recall from our previous conversations, networks produce the results you've described precisely because they were designed to, which is generally to combat all forms of ambiguity in language modeling, or, said another way, to preserve more information than can be preserved by mere tokenized word sequences. An example application objective would be to extract some information as pertains to some understanding---and I stress 'understanding' here, in that we're involving the concept of meaning from the very outset---from some sequence of text. For example, probability of spam as a function of email content, or a person's political alignment as a function of the contents of their tweets. General practices involve tokenizing words according to some criteria (e.g. order or frequency of occurrence, etc), which would be fine if words and their orders had precisely one meaning, but that's clearly a preposterous expectation of human language; not only do words have multiple (and frequently very unrelated) meanings, syntax can vary wildly and even carry meaning itself! There are many reasons why quantizing language makes for difficult comprehension and modeling. After all, it's essentially setting out to model a continuum by first quantizing all your information.
Thankfully, topological semiotics can ameliorate this. In great brevity, there are two key concepts relevant to this discussion:
- An ideal simulacrum of the physically real is as continuous as physical reality.
- "Comprehensible space" (a manifold of aggregated interpretants) receives novel input only as differentials.
The first of these, as pertains to this explanation, simply indicates (borrowing from Charles Sanders Peirce's triadic model) that an interpretant (a subjective experiential understanding of reality, if you will) should be as continuous as the object whose impressions became the signals that instigated it. Relating this to some of the aforementioned problems, consider that the meaning of the word "under" is not (in any realistically comprehensible way) related to the meanings of its constituent signs (e.g. letters), just as the meaning of "under the weather" is scarcely relatable to the meaning of its constituent signs (e.g. collocations, words, letters, and so-on); understanding the meaning of this idiom depends on knowledge of both human interaction with storms (e.g. to know that one might become ill), and an understanding of the human experience of illness (to know that this is generally undesirable). Attempting to quantize this continuous nature as a hierarchy as we tend to attempt (e.g. moments ago when I mentioned constituent signs) is both unnecessary because we can model meaning continuously, and futile because hierarchies are themselves constructs. In simpler terms: manifold learning is an ideal choice for simulating relative sign meanings.
The second above concept may seem strange and unrelated, but it carries several critical implications, of which the following is most pertinent: what is known can only exist relative to what has been known. In the more elegant words of Roland Barthes, "No sooner is a form seen than it must resemble something: humanity seems doomed to analogy." This permits imagination, but confines understanding to the space of that which has been previously experienced. In other words, experiences of reality can only exist relative to themselves; our model of language meanings can only describe meaning relative to that from which its landscape was shaped. In our application, the transformation we end up with (i.e. the features of the network), which typically receives tokenized sequences and returns vector representations within the manifold of our designing, can only provide meanings relative to the corpus on which it was trained (and, indeed, the route of navigation through that corpus), varying in depiction---which is to say, varying in the way that it describes meaning---by the method of modeling. For example, the "skipgram" model describes meaning as spatially relational context (meaning points to context), while the "continuous bag of words" model describes meaning as consisting of spatially relational context (context points to meaning).
There are obviously some heavy assumptions being made here, and not exclusively good ones. We know that relative frequency of relative sequential word position doesn't truly carry all the meanings that can be crafted into a sequence. This should come as no surprise, of course, since we're attempting to quantize a continuous relationship; creating a discrete manifold of understanding to describe continuous relationships. Shame on us, but, as you can see, it's a difficult habit to break. Nevertheless, the key take-away here is that the primary objective described above, regardless of which method you use to generate your model, is to find an equation that transforms the vector representations of tokenized sequences into vector representations of relative meanings---or, at least, the best simulacrum that a particular corpus, technique, and architecture can provide. As before, what a particular axis (or dimension) represents varies by method, and can be as arbitrary as x, y and z, or quite specific. For example, if your purposes can afford a softmax activation function, you can describe vector representations as relative constituency, and that's amusingly elegant: you could describe everything as pertains to its relationship with the words "man," "bear," and "pig," for which the mythological "man-bear-pig" might dwell somewhere in the midst. For better understanding, we can observe the same action in reverse: the secondly mentioned concept of topological semiotics indicates that an understanding of a "man-bear-pig" depends solely on understanding(s) of "man," "bear," "pig," and nothing more. As predicted, training with a softmax activation function, which is a constrained topology, indeed requires precisely that!
In terms perhaps more familiar to the linguistically inclined, consider this alternative depiction: the word "man" can produce ample interpretants, especially since the nature of interpretants should be expected to be, as aforementioned, pretty continuous. For example, the word "queen" could be used in reference to a monarch, or to a suit of playing cards, or to a person bearing such a name, among other things. Meanwhile, a queen (monarch) of the lineage "Queen" could appear more or less similar to a queen (playing card); did Lewis Carroll not evoke precisely this depiction? We can make our models high-dimensional to ameliorate the quantization inherent in dimensionality (much as how increasing the number of edges of a polygon better simulates a circle), giving more freedom for relational complexity: "man" and "woman" can reside simultaneously near to each other along some axes (e.g. such that a region might resemble "species") and distant along others (e.g. such that a region might resemble "gender"). Thankfully, we're able to understand our transformation from sign to interpretant (and so-on) because these operations are entirely self-supervised; this is the very action of understanding the meaning of what you're reading. So, then, if I ask you for a word with a meaning most closely resembling that of "big" in the phrase "a big pizza," you can consider the meaning of "big" as pertains to the given sentence, and find something very close to it (literally proximal on the manifold of your comprehensibility): perhaps the word "large." The transformation just performed in our minds is equivalent to that which these models attempt to simulate. Notice that removing the first word of the proposed sequence, leaving us with simply "big pizza," could instead refer to the domain of corporate pizza, demonstrating that sequential context indeed carries information.
Tokenizing by word frequency simulates density, such that "big pizza" still most likely approximately means "a large pizza," just as your equation could be interpreted as pointing toward an emasculated ruler with strong empathic faculties; a concept which simply arises in written English infrequently, just as it does in that which lies beneath (e.g. imagination, physical reality, and so-on).
So that's all quite a lot of words, however I fear I've left you parched for meaning; preferring to circle back around with this understanding: how do these kinds of models permit the behavior indicated by the equation in question? It's truly just as easy as aforementioned: the network features represent a transformation from the coordinate system of one manifold to another (ideally the easiest for a given dimensionality, sought, for example, with linear regression). In this case, you could loosely consider the transformation as one between a coordinate system of a sample of written language and one of (a simulacrum of) spatially contextual relative meaning. Precisely what aspects of a transformation the features represent depends, as aforementioned, largely on the technique and corpus used, and although this can vary to almost any degree one wishes it to, a wild and whacky vector space is just fine so long as we only make direct comparisons in the same vector space. Notice that a corpus's features are resultant of transformation from some other manifold (e.g. something like experiential reality spanning to written form), so by extension a simulacrum of a written language can access information about manifolds underlying itself, not exceeding the extent permitted by the transformations spanning thereto (e.g. breadth of experiences underlying the generation of the writing that constitutes the corpus). This is lovely in theory, but typically very messy in practice.
When we look at the equation you described, as in looking at most conceptual depictions of word vectors (e.g. search that in google images), it's easy to think that the vector of word "king" plus the vector of word "woman" minus the vector of the word "man" approximately equals the vector of the word "queen," but that interpretation would be severely myopic. Rather, the vector of a generalized spatially contextual relative meaning of "king" added to the same of "woman" and subtracting the same of "man" results in a vector that points toward a region of our manifold. If we try to describe what that region represents, we'll need to transform it to something we can talk about (the same kind of coordinate transformation, except done by our minds, typically called "reading"). The actual meaning of the equation becomes far more comprehensible if we pull a Baudrillard and speak in terms of a map. We can create our manifold (map) with any dimensionality, and, in the same way that latitude and longitude describe a position on a plane, we can describe our n-dimensional map with a vector for each axis. In simpler terms, think of the output of our transformation (network) as coordinates. We can do vector math like the equation in question, and the coordinates we end up with are not ambiguous. However, to talk about what's on that region, we'll need words, nearest of which---in the reference frame of written English, and for having used our corpus---is "queen." Again, we are the ones who make this transformation from our engineered manifold (machine-learnt) to one of written English (my writing this, now); we can only compare to what we know. In other words, the word2vec token nearest the coordinates of the output is "queen."
So, again, what do the coordinates on our map point to, after following the equation in question; transforming into the coordinate system of our engineered map of a spatially contextual relative understanding of written English? We could invent a word to describe precisely that point, although we apparently scarcely need one (since one does not already exist); in fact, the more precisely a word points to a meaning, the less frequently it will tend to be useful---a natural result of a quantized continuum (e.g. in choosing one number on a continuum, the probability of selecting precisely any one number goes to zero), although not exclusively influenced thereby. Again, however, if we ask which word within our corpus lies nearest to this point indicated by the coordinates produced by the equation in question, the answer (for example, using Gensim and GloVe trained on Wikipedia 2014 + Gigaword 5 (6 billion tokens and 200 dimensions) in word2vec format) is the token representing "queen," thus its approximate equality. Observe:
import pandas as pd
from gensim.models import KeyedVectors

# `vectors` is the GloVe model mentioned above, converted to word2vec format, e.g.:
# vectors = KeyedVectors.load_word2vec_format('glove.6B.200d.w2v.txt')
coordinates = pd.DataFrame()
coordinates['king'] = vectors.get_vector('king')
coordinates['woman'] = vectors.get_vector('woman')
coordinates['king+woman'] = coordinates['king'] + coordinates['woman']
coordinates['man'] = vectors.get_vector('man')
coordinates['king+woman-man'] = coordinates['king+woman'] - coordinates['man']
coordinates['queen'] = vectors.get_vector('queen')
coordinates.head() # shows the first 5 of 200 dimensions for each column
'''
+---+-----------+----------+------------+----------+----------------+-----------+
| | king | woman | king+woman | man | king+woman-man | queen |
+---+-----------+----------+------------+----------+----------------+-----------+
| 0 | -0.493460 | 0.52487 | 0.031410 | 0.10627 | -0.074860 | 0.466130 |
+---+-----------+----------+------------+----------+----------------+-----------+
| 1 | -0.147680 | -0.11941 | -0.267090 | -0.58248 | 0.315390 | -0.097647 |
+---+-----------+----------+------------+----------+----------------+-----------+
| 2 | 0.321660 | -0.20242 | 0.119240 | -0.27217 | 0.391410 | -0.072473 |
+---+-----------+----------+------------+----------+----------------+-----------+
| 3 | 0.056899 | -0.62393 | -0.567031 | -0.26772 | -0.299311 | -0.037131 |
+---+-----------+----------+------------+----------+----------------+-----------+
| 4 | 0.052572 | -0.15380 | -0.101228 | -0.11844 | 0.017212 | -0.169970 |
+---+-----------+----------+------------+----------+----------------+-----------+
'''
# it's not like the equation was referring to eigenqueen anyway...
vectors.most_similar(positive=['king', 'woman'], negative=['man'], topn=3)
'''
[('queen', 0.6978678703308105),
('princess', 0.6081745028495789),
('monarch', 0.5889754891395569)]
'''
(The similarity to 'queen' is slightly lower in the example above than in those that follow because the Gensim object's most_similar method l2-normalizes the resultant vector.)
from sklearn.metrics.pairwise import cosine_similarity

similarity = cosine_similarity(coordinates['queen'].values.reshape((-1,200)),
coordinates['king+woman-man'].values.reshape((-1,200)))
print('Similarity: {}'.format(similarity))
# Similarity: [[0.71191657]]
# let's assign a word/token for the equation-resultant coordinates and see how it compares to 'queen'
vectors.add(entities=['king+woman-man'], weights=[coordinates['king+woman-man'].values])  # gensim 3.x API; in gensim 4+ this method is add_vectors
distance = vectors.distance('king+woman-man','queen')
print('Distance: {}'.format(distance))
# Distance: 0.28808343410491943
# Notice that similarity and distance sum to one.
Why are the equation-resultant coordinates only 71% similar to those of the word "queen"? There are two big factors:
Firstly, by seeking to transform coordinates into a word, one attempts to make transformations inverse to those that got us to coordinates in the first place. Thus, as one can only select as correct from the discrete (tokenized) words, of which "queen" is the nearest, we settle for it. That being said, leaving our information in encoded form is fine for use in other neural networks, which adds to their practical value, and implies that word embeddings used in deep neural networks can be expected to perform slightly better in application than they do under human-language-based scrutiny.
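That "settle for the nearest discrete token" step can be sketched with plain numpy on the toy 2-D vectors from the top of this post. Note that `nearest_token` is my own illustrative helper, not a gensim API (gensim's equivalents are `most_similar` and `similar_by_vector`):

```python
import numpy as np

# toy vocabulary from the edit at the top of this post
vocab = {
    'man':   np.array([-1.0, -1.0]),
    'woman': np.array([ 1.0, -1.0]),
    'king':  np.array([-1.0,  1.0]),
    'queen': np.array([ 1.0,  1.0]),
}

def nearest_token(query, exclude=()):
    """Return the (token, cosine) pair with the highest cosine similarity
    to `query`, skipping any tokens listed in `exclude`."""
    best, best_sim = None, -2.0
    for word, vec in vocab.items():
        if word in exclude:
            continue
        sim = query @ vec / (np.linalg.norm(query) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best, best_sim

# In this toy space the equation's coordinates land exactly on 'queen';
# in a real 200-dimensional space they only land *near* it, and we settle
# for the nearest discrete token.
query = vocab['king'] + vocab['woman'] - vocab['man']
print(nearest_token(query, exclude=('king', 'woman', 'man')))
```

The `exclude` argument mirrors what gensim's most_similar does implicitly: the input words themselves are dropped from the candidate pool before ranking.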
Speaking of which, 71% isn't an especially good performance; why did it not do better? After all, is not the implication of the equation plain to see? Nonsense! The meaning we see in the equation is thoroughly embedded in our experiential understandings of reality. These models don't produce quite the results we'd like, yet better than we should've hoped for, and often entirely sufficiently for our purposes. Just as translation out of the constructed manifold into written language is cleaved as needed for translation (i.e. so we can write about where the vectors pointed, as we did just now), so, too, was meaning cleaved before our machine-learnt transformation in the first place, by nature of our having first quantized our signals in tokenization. The equation does not mean what its writer intended for it to mean. Its expressions are poorly phrased, both input and thereby output. Written as plainly as I can rightly comprehend, our translator performs marginally in this specific task (in part) because our translations both prior to and following are also marginal. We should be glad that this equation holds at all, and ought not expect as much in many intuitively logically similar cases. Observe:
vectors.most_similar(positive=['patriarch','woman'], negative=['man'], topn=31)
'''
[('orthodox', 0.5303177833557129),
('patriarchate', 0.5160591006278992),
('teoctist', 0.5025782585144043),
('maronite', 0.49181658029556274),
('constantinople', 0.47840189933776855),
('antioch', 0.47702693939208984),
('photios', 0.47631990909576416),
('alexy', 0.4707275629043579),
('ecumenical', 0.45399680733680725),
('sfeir', 0.45043060183525085),
('diodoros', 0.45020371675491333),
('bartholomew', 0.449684739112854),
('irinej', 0.4489184319972992),
('abune', 0.44788429141044617),
('catholicos', 0.4440777003765106),
('kirill', 0.44197070598602295),
('pavle', 0.44166091084480286),
('abuna', 0.4401337206363678),
('patriarchy', 0.4349902272224426),
('syriac', 0.43477362394332886),
('aleksy', 0.42258769273757935),
('melkite', 0.4203716516494751),
('patriach', 0.41939884424209595),
('coptic', 0.41715356707572937),
('abbess', 0.4165824055671692),
('archbishop', 0.41227632761001587),
('patriarchal', 0.41018980741500854),
('armenian', 0.41000163555145264),
('photius', 0.40764760971069336),
('aquileia', 0.4055507183074951),
('matriarch', 0.4031881093978882)] # <--- 31st nearest
'''
If you change 'woman' to 'female' and change 'man' to 'male', the rank falls from an already abysmal 31st to 153rd! I'll explain why in a moment. Observe that as much as we'd like to think we're dealing with relative meanings, that simply isn't correct. That doesn't mean, however, that it isn't super useful for many applications!
vectors.most_similar(positive=['metal'], negative=['genre'], topn=3)
'''
[('steel', 0.5155385136604309),
('aluminum', 0.5124942660331726),
('aluminium', 0.4897114634513855)]
'''
vectors.most_similar(positive=['metal'], negative=['material'], topn=3)
'''
[('death/doom', 0.43624603748321533),
('unblack', 0.40582263469696045),
('death/thrash', 0.3975086510181427)]
'''
# seems about right
Why such variance in performance? There isn't any; it's doing precisely what it was designed to do. The discrepancy isn't in the network, but in our expectations of it. This is the second aforementioned big factor: we see words whose meanings we know, so we think that we know the meanings of the words we see. We're returned 'queen' not because that's the word for a king who isn't a man and is a woman, but because of where those tokens tend to sit relative to one another. Sure, there is a non-zero contribution of relative meanings, but that's a secondary action. If we aren't dealing with relative meanings, what do the outputs represent? Recall that I described the output of our transformation (network) as a "generalized spatially contextual relative meaning," the spatially contextual relativity of which is the inevitable result of the architectures and/or unsupervised mechanisms typically applied. As before, spatial relativity certainly carries some meaningful information, but written English employs many parameters in delivering meaning. If you want richer meaning in your theoretical manifolds than spatially contextual relative meaning, you'll need to design a method of supervision more suited to your desired or expected performance.
With this in mind, and looking to the code-block above, it's clear that 'metal' when referring specifically to not-'genre' produces vectors near types of metallic materials, and likewise 'metal' when referring specifically to not-'material' produces vectors near types of metal genres. This is almost entirely because tokens whose vectors are near to that of 'metal' but far from that of 'genre' seldom appear in spatial proximity with references to 'metal' as a genre, and likewise the whole lot for 'material.' In simpler terms, how often, when writing about physical metallicity, does one mention music genres? Likewise, how often, when writing about death metal (music genre) does one speak of steel or aluminum? Now it should be clear why the results of these two examples can seem so apt, while the patriarch/matriarch expectation fell flat on its face. It should also make the underlying action of the result of the equation in question quite clear.
So, all said, what is it about a model like word2vec that makes the equation hold true? It provides a transformation from one coordinate system to another (in this case, from a simulacrum of written English to one of spatially contextual relative meaning) in which the relationship in question occurs frequently enough in general written English to satisfy the given equation, behaving precisely as was intended by model architecture. |
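Stripped of gensim and trained vectors, the analogy machinery is just vector arithmetic plus cosine similarity. Here is a minimal sketch with an invented toy vocabulary — the vectors are made up purely for illustration, and gensim additionally unit-normalizes vectors before combining, which this sketch omits:

```python
import numpy as np

# Toy word vectors, invented purely for illustration (not trained embeddings).
vocab = {
    'king':  np.array([0.9, 0.8, 0.1]),
    'queen': np.array([0.9, 0.1, 0.8]),
    'man':   np.array([0.1, 0.9, 0.1]),
    'woman': np.array([0.1, 0.1, 0.9]),
}

def most_similar(positive, negative, topn=1):
    # Target direction: sum the positives, subtract the negatives.
    target = sum(vocab[w] for w in positive) - sum(vocab[w] for w in negative)
    scores = []
    for word, vec in vocab.items():
        if word in positive or word in negative:
            continue  # like gensim, exclude the query words themselves
        cos = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        scores.append((word, float(cos)))
    return sorted(scores, key=lambda s: -s[1])[:topn]

print(most_similar(positive=['king', 'woman'], negative=['man']))  # 'queen' comes out on top
```

With only four toy words the ranking is trivial, but the same scoring run over a real vocabulary is exactly what produced the ranked lists above.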
Learn To Code In Python
Teaches you how to code in python. By PYer
This tutorial expects some basic knowledge of coding in another language.
What is python?
Python is a very popular coding language. Many people use it for serious projects, and it is very useful to learn. It was created in 1991 by Guido van Rossum.
Look at a few uses of python:
Desktop Applications
Web Applications
Complex Scientific Equations
Let's look at a few reasons why it is useful:
Readable/Understandable Code
Compatible with other systems/platforms
Millions of useful modules
These are just a few; you can find a bunch more by researching it.
Know This Before We Start
What we will be teaching you is specifically python 3. This is the newest major version, but version 2 is still widely used.
Here we will be using replit, but there are multiple text editors you can find.
Emacs
Komodo Edit
Vim
Sublime Text
More at Python Text Editors
Python Syntax
Python syntax was made for readability and easy editing. For example, the python language uses a : and indented code to mark a block, while javascript and others generally use {} (with indentation optional).
First Program
Let's create a python 3 repl, and call it Hello World. Now you have a blank file called main.py. Now let us write our first line of code:
helloworld.py
print('Hello world!')
Brian Kernighan actually wrote the first "Hello, World!" program as part of the documentation for the BCPL programming language developed by Martin Richards.
Now, press the run button, which obviously runs the code. If you are not using replit, this will not work. You should research how to run a file with your text editor.
Command Line
If you look to your left at the console where hello world was just printed, you can see a >, >>>, or $ depending on what you are using. After the prompt, try typing a line of code.
Python 3.6.1 (default, Jun 21 2017, 18:48:35)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
> print('Testing command line')
Testing command line
> print('Are you sure this works?')
Are you sure this works?
>
The command line allows you to execute single lines of code at a time. It is often used when trying out a new function or method in the language.
New: Comments!
Another cool thing that you can do in almost all languages is comments. In python, a comment starts with a #. The computer ignores all text after the #.
shortcom.py
# Write some comments!
If you have a huge comment, do not put a # in front of all 350 lines; just put ''' before it, and ''' at the end. Technically, this is not a comment but a string, but the computer still ignores it, so we will use it.
longcom.py
'''
Dear PYer,
I am confused about how you said you could use triple quotes to make
SUPER
LONG
COMMENTS
!
I am wondering if this is true,
and if so,
I am wondering if this is correct.
Could you help me with this?
Thanks,
Random guy who used your tutorial.
'''
print('Testing')
New: Variables!
Unlike many other languages, there is no var, let, or const to declare a variable in python. You simply go name = 'value'.
vars1.py
x = 5
y = 7
z = x*y # 35
print(z) # => 35
Remember, there is a difference between integers and strings: strings are wrapped in quotes ("" or ''). To convert between these two, you can put an int in a
str() function, and a string of digits in an int() function. There is also a less used type, called a float. Floats are numbers with decimals. Change them using the float() function.
vars2.py
x = 5
x = str(x)
b = '5'
b = int(b)
print('x = ', x, '; b = ', str(b), ';') # => x = 5; b = 5;
Instead of using the , in the print function, you can put a + to combine the variables and string.
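For example, a quick sketch of the difference (the values here are just for illustration):

```python
x = 5
b = '5'
# Commas convert for you and insert spaces:
print('x =', x)                              # x = 5
# + joins strings, so ints must be converted with str() first:
print('x = ' + str(x) + '; b = ' + b + ';')  # x = 5; b = 5;
```

If you forget the str() and try 'x = ' + x, Python raises a TypeError instead of converting the int for you.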
Operators
There are many operators in python:
+
-
/
*
These operators are the same in most languages, and allow for addition, subtraction, division, and multiplication.
Now, we can look at a few more complicated ones:
%
//
**
+=
-=
/=
*=
Research these if you want to find out more...
simpleops.py
x = 4
a = x + 1
a = x - 1
a = x * 2
a = x / 2
You should already know everything shown above, as it is similar to other languages. If you continue down, you will see more complicated ones.
complexop.py
a += 1
a -= 1
a *= 2
a /= 2
The ones above are to edit the current value of the variable.
Sorry to JS users: there is no i++ in Python; use i += 1 instead.
Fun Fact:
The python language was named after Monty Python.
If you really want to know about the others, view Py Operators
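If you would rather see them in action than research them, here is a quick sketch of the first three from the list above (the numbers are arbitrary):

```python
print(7 % 3)   # => 1 (modulo: the remainder of 7 divided by 3)
print(7 // 3)  # => 2 (floor division: divide, then drop the decimal part)
print(7 ** 3)  # => 343 (exponent: 7 to the power of 3)
```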
More Things With Strings
Like the title?
Anyways, a ' and a " both indicate a string, but do not combine them!
quotes.py
x = 'hello' # Good
x = "hello" # Good
x = "hello' # ERRORRR!!!
String Slicing
You can look at only certain parts of the string by slicing it, using [start:end].
The first number is the index where the slice starts (counting from 0 at the front), and the second is the index where it stops; negative numbers count in from the back.
slicing.py
x = 'Hello everybody!'
x[1] # 'e'
x[-1] # '!'
x[5] # ' '
x[1:] # 'ello everybody!'
x[:-1] # 'Hello everybody'
x[2:-3] # 'llo everybo'
Methods and Functions
Here is a list of functions/methods we will go over:
.strip()
len()
.lower()
.upper()
.replace()
.split()
I will make you try these out yourself. See if you can figure out how they work.
strings.py
x = " Testing, testing, testing, testing "
print(x.strip())
print(len(x))
print(x.lower())
print(x.upper())
print(x.replace('test', 'runn'))
print(x.split(','))
Good luck, see you when you come back!
New: Input()
Input is a function that reads what the user types into the command line. It takes one optional parameter, which is the prompt shown to the user.
inp.py
print('Type something: ')
x = input()
print('Here is what you said: ', x)
If you wanted to make it smaller, and look neater to the user, you could do...
inp2.py
print('Here is what you said: ', input('Type something: '))
Running:
inp.py
Type something:
Hello World
Here is what you said: Hello World
inp2.py
Type something: Hello World
Here is what you said: Hello World
New: Importing Modules
Python has a lot of functions that are located in other .py files. You need to import these modules to gain access to them. You may wonder why python did this. The purpose of separate modules is to make python faster: instead of loading millions and millions of functions, it only needs a few basic ones. To import a module, you must write import <modulename>. Do not add the .py extension to the file name. In this example, we will be using a python created module named random.
module.py
import random
Now, I have access to all functions in the random.py file. To access a specific function in the module, you would do <module>.<function>. For example:
module2.py
import random
print(random.randint(3,5)) # Prints a random number between 3 and 5
Pro Tip:
Do from random import randint to not have to do random.randint(), just randint()
To import all functions from a module, you could dofrom random import *
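A short sketch showing the import styles side by side (using random as the example module):

```python
import random               # call with the module name: random.randint(...)
from random import randint  # call directly: randint(...)
# from random import * would import everything, but it can cause name clashes

print(random.randint(1, 3))  # a random number between 1 and 3
print(randint(1, 3))         # same function, shorter call
```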
New: Loops!
Loops allow you to repeat code over and over again. This is useful if you want to print Hi with a delay of one second 100 times.
for Loop
The for loop goes through a list of values, setting a separate variable to one item of the list each time through.
Let's say we wanted to create the example above.
loop.py
from time import sleep
for i in range(100):
print('Hello')
sleep(.3)
This will print Hello with a .3 second delay 100 times. This is just one way to use it, but it is usually used like this:
loop2.py
import time
for number in range(100):
print(number)
time.sleep(.1)
while Loop
The while loop runs the code while something stays true. You would put while <expression>. Every time the loop runs, it evaluates the expression. If it is True, it runs the code; if not, it continues outside of the loop. For example:
while.py
while True: # Runs forever
print('Hello World!')
Or you could do:
while2.py
import random
position = '<placeholder>'
while position != 1: # will run at least once
position = random.randint(1, 10)
print(position)
New: if Statement
The if statement allows you to check if something is True. If so, it runs the code, if not, it continues on. It is kind of like a while loop, but it executes only once. An if statement is written:
if.py
import random
num = random.randint(1, 10)
if num == 3:
print('num is 3. Hooray!!!')
if num > 5:
print('Num is greater than 5')
if num == 12:
print('Num is 12, which means that there is a problem with the python language. Extra credit if you can figure out why!')
Now, you may want it to print only one message, not every one that happens to be True. You can do that with an elif statement:
elif.py
import random
num = random.randint(1, 10)
if num == 3:
print('Num is three, this is the only msg you will see.')
elif num > 2:
print('Num is not three, but is greater than 2')
Now, you may wonder how to run code if none work. Well, there is a simple statement called else:
else.py
import random
num = random.randint(1, 10)
if num == 3:
print('Num is three, this is the only msg you will see.')
elif num > 2:
print('Num is not three, but is greater than 2')
else:
print('No category')
New: Functions (def)
So far, you have only seen how to use functions other people have made. Let's use the example that you want to print a random number between 1 and 9, and print different text every time.
It is quite tiring to type:
Characters: 389
nofunc.py
import random
print(random.randint(1, 9))
print('Wow that was interesting.')
print(random.randint(1, 9))
print('Look at the number above ^')
print(random.randint(1, 9))
print('All of these have been interesting numbers.')
print(random.randint(1, 9))
print("these random.randint's are getting annoying to type")
print(random.randint(1, 9))
print('Hi')
print(random.randint(1, 9))
print('j')
Now with functions, you can seriously lower the amount of characters:
Characters: 254
functions.py
import random
def r(t):
print(random.randint(1, 9))
print(t)
r('Wow that was interesting.')
r('Look at the number above ^')
r('All of these have been interesting numbers.')
r("these random.randint's are getting annoying to type")
r('Hi')
r('j')
There you go! Try making your own functions!
The End
Now you know all of the basics of python. Congratulations!
Please upvote. Thanks! |
[1] Python Made EZ!
Hi everyone!
Hope y'all are doing great! School is starting real soon, so I hope you have been studying to get ready, or are enjoying the last of vacation!
So I made this tutorial on python so that others can try to learn from it and get better! Hopefully, what I say will be comprehensive and easy to read.
Most of it I will write, but sometimes I will include some stuff from other websites which explain better than me. I will put what I've taken in italic, and the sources and helpful links at the bottom.
By the way, this is the first of tutorials in languages I'm making!
I will be covering:
Hello World!: History of Python
Key Terms
Comments
print
Data Types
Variables
- Printing Variables
- Naming Variables
- Changing Variables
Concatenation
Operators
Comparison Operators
Conditionals
-if
-elif
-else
input
A Bit of Lists
for Loops
while Loops
Functions
Imports
-time
-random
-math
Small Programs and Useful Stuff
ANSI Escape Codes
Links
Goodbye World!: End
Well without any further ado, let's get on with it!
Hello World!: History of Python
Python is a general-purpose programming language. It was created by Guido van Rossum and released in 1991. Its main features are its readability, simple syntax, and few keywords, which make it great for beginners with no prior coding experience.
Fun fact: Guido van Rossum was reading the scripts of Monty Python when he was creating the language; he needed "a name that was short, unique, and slightly mysterious" so he decided to call the language Python.
(Last year we had to make a poem on a important person in Computer Science, so I made one on him: https://docs.google.com/document/d/1yf2T2fFaS3Vwk7zkvN1nPOr8XPXJroL1yHI7z5qhaRc/edit?usp=sharing)
Key Terms
Now before we continue, just a few words you should know:
Console: The black part located at the right/bottom of your screen
Input: stuff that is taken in by the computer (more on this later)
Output: the information processed and sent out by the computer (usually in the console)
Errors: actually, a good thing! Don't worry if you have an error, just try to learn from it and correct it. That's how you can improve, by knowing how to correct errors.
Execute: run a piece of code
Comments
Comments are used for explaining your code, making it more readable, and to prevent execution when testing code.
This is how to comment:
# this is a comment
# it starts with a hashtag #
# Python will ignore and not run anything after the hashtag
You can also have multiline comments:
"""this is a multiline commentI can make it very long!"""
print
The print() function is used for outputting a message (object) onto the console. This is how you use it:
print("Something.")
# remember this is a comment
# you can use double quotes "
# or single quotes '
print('Using single quotes')
print("Is the same as using double quotes")
You can also use triple quotes for big messages.
Example:
print("Hello World!")
print("""
Rules:
[1] Code
[2] Be nice
[3] Lol
[4] Repeat
""")
Output:
Hello World!
Rules:
[1] Code
[2] Be nice
[3] Lol
[4] Repeat
Data Types
Data types are the classification or categorization of data items.
These are the 4 main data types:
int: (integer) a whole number
12 is an int, so is 902.
str: (string) a sequence of characters
"Hi" is a str, so is "New York City".
float: (float) a decimal number
-90.0 is a float, so is 128.84
bool: (boolean) data type with 2 possible values: True and False
Note that True has a capital T and False has a capital F!
Variables
Variables are used for containing/storing information.
Example:
name = "Lucy" # this variable contains a str
age = 25 # this variable contains an int
height = 160.5 # this variable contains a float
can_vote = True # this variable contains a Boolean that is True (because Lucy is 25 y/o)
Printing variables:
To print variables, you simply do print(variableName):
print(name)
print(age)
print(height)
print(can_vote)
Output:
Lucy
25
160.5
True
Naming Variables:
You should try to make variables with a descriptive name. For example, if you have a variable with an age, an appropriate name would be age, not how_old or number_years.
Some rules for naming variables:
must start with a letter (not a number)
no spaces (use underscores)
no keywords (like print,input,or, etc.)
Changing Variables:
You can change variables to other values.
For example:
x = 18
print(x)
x = 19
print(x)
# the output will be:
# 18
# 19
As you can see, we have changed the variable x from the initial value of 18 to 19.
Concatenation
Let's go back to our first 3 variables:
name = "Lucy"
age = 25
height = 160.5
What if we want to make a sentence like this: Her name is Lucy, she is 25 years old and she measures 160.5 cm.
Of course, we could just print that whole thing like this: print("Her name is Lucy, she is 25 years old and she measures 160.5 cm.")
But if we want to do this with variables, we could do it something like this:
print("Her name is " + name + ", she is " + age + " years old and she measures " + height + " cm.")
# try running this!
Aha! If you ran it, you should have gotten an error like TypeError: can only concatenate str (not "int") to str (the exact wording depends on your Python version).
Basically, it means that you cannot concatenate int to str. But what does concatenate mean?
Concatenate means join/link together, like the concatenation of "sand" and "castle" is "sandcastle"
In the previous code, we want to concatenate the bits of sentences ("Her name is ", ", she is", etc.) as well as the variables (name, age, and height).
Since the computer can only concatenate str together, we simply have to convert those variables into str, like so:
print("Her name is " + name + ", she is " + str(age) + " years old and she measures " + str(height) + " cm.")
# since name is already a str, no need to convert it
Output:
Her name is Lucy, she is 25 years old and she measures 160.5 cm.
Operators
A symbol or function denoting an operation
Basically operators can be used in math.
List of operators:
+ For adding numbers (can also be used for concatenation) | Eg: 12 + 89 = 101
- For subtracting numbers | Eg: 65 - 5 = 60
* For multiplying numbers | Eg: 12 * 4 = 48
/ For dividing numbers | Eg: 60 / 5 = 12
** Exponentiation ("to the power of") | Eg: 2**3 = 8
// Floor division (divides numbers and takes away everything after the decimal point) | Eg: 100 // 3 = 33
% Modulo (divides numbers and returns what's left (the remainder)) | Eg: 50 % 30 = 20
These operators can be used for decreasing/increasing variables.
Example:
x = 12
x += 3
print(x)
# this will output 15, because 12 + 3 = 15
You can replace the + in += by any other operator that you want:
x = 6
x *= 5
print(x)
y = 9
y /= 3
print(y)
# this will output 30 and then below 3.
Also: x += y is just a shorter version of writing x = x + y; both work the same
Comparison Operators
Comparison operators are for, well, comparing things. They return a Boolean value, True or False. They can be used in conditionals.
List of comparison operators:
== equal to | Eg: 7 == 7
!= not equal to | Eg: 7 != 8
> bigger than | Eg: 12 > 8
< smaller than | Eg: 7 < 9
>= bigger than or equal to | Eg: 19 >= 19
<= smaller than or equal to | Eg: 1 <= 4
If we type these into the console, we will get either True or False:
6 > 7 # will return False
12 < 80 # will return True
786 != 787 # will return True
95 <= 96 # will return True
Conditionals
Conditionals are used to verify if an expression is True or False.
if
Example: we want to see if a number is bigger than another one.
How to say it in English: "If the number 10 is bigger than the number 5, then..."
How to say it in Python:
if 10 > 5:
# etc.
All the code that is indented will be inside that if statement. It will only run if the condition is verified.
You can also use variables in conditionals:
x = 20
y = 40
if x < y:
print("20 is smaller than 40!")
# the output of this program will be "20 is smaller than 40!" because the condition (x < y) is True.
elif
elif is basically another if, but it only gets checked when the conditions before it were False; this lets you test several conditions in a row.
Example:
age = 16
if age == 12:
print("You're 12 years old!")
elif age == 14:
print("You're 14 years old!")
elif age == 16:
print("You're 16 years old!")
This program will output:
You're 16 years old!
Because age = 16.
else
else usually comes after the if/elif. Like the name implies, the code inside it only executes if the previous conditions are False.
Example:
age = 12
if age >= 18:
print("You can vote!")
else:
print("You can't vote yet!")
Output:
You can't vote yet!
Because age < 18.
input
The input function is used to prompt the user. It will stop the program until the user types something and presses the return key.
You can assign the input to a variable to store what the user types.
For example:
username = input("Enter your username: ")
# then you can print the username
print("Welcome, "+str(username)+"!")
Output:
Enter your username: Bookie0
Welcome, Bookie0!
By default, the input converts what the user writes into str, but you can specify it like this:
number = int(input("Enter a number: ")) # converts what the user says into an int
# if the user types a str or float, then there will be an error message.
# doing int(input()) is useful for calculations, now we can do this:
number += 10
print("If you add 10 to that number, you get: "+ str(number)) # remember to convert it to str for concatenation!
Output:
Enter a number: 189
If you add 10 to that number, you get: 199
You can also do float(input("")) to convert it to float.
Now, here is a little program summarizing a bit of what you've learnt so far.
Full program:
username = input("Username: ")
password = input("Password: ")
admin_username = "Mr.ADMIN"
admin_password = "[email protected]"
if username == admin_username:
if password == admin_password:
print("Welcome Admin! You are the best!")
else:
print("Wrong password!")
else:
print("Welcome, "+str(username)+"!")
Now a detailed version:
# inputs
username = input("Username: ") # asks user for the username
password = input("Password: ") # asks user for the password
# variables
admin_username = "Mr.ADMIN" # setting the admin username
admin_password = "[email protected]" # setting the admin password
# conditionals
if username == admin_username: # if the user entered the exact admin username
if password == admin_password: # if the user enters the exact and correct admin password
print("Welcome Admin! You are the best!") # a welcome message only to the admin
else: # if the user gets the admin password wrong
print("Error! Wrong password!") # an error message appears
else: # if the user enters something different than the admin username
print("Welcome, general user "+str(username)+"!") # a welcome message only for general users
Output:
An option:
Username: Mr.ADMIN
Password: i dont know
Error! Wrong password!
Another option:
Username: Mr.ADMIN
Password: [email protected]
Welcome Admin! You are the best!
Final option:
Username: Bob
Password: Chee$e
Welcome, general user Bob!
A bit of lists
A list is a collection which is ordered and changeable. Lists are written with square brackets: []
meat = ["beef", "lamb", "chicken"]
print(meat)
Output:
['beef', 'lamb', 'chicken']
You can access specific items of the list with the index number. Now here is the kinda tricky part. Indexes start at 0, meaning that the first item of the list has an index of 0, the second item has an index of 1, the third item has an index of 2, etc.
meat = ["beef", "lamb", "chicken"]
# Index: 0 1 2 etc.
print(meat[2]) # will output "chicken" because it is at index 2
You can also use negative indexing: index -1 means the last item, index -2 means the second to last item, etc.
meat = ["beef", "lamb", "chicken"]
# Index: -3 -2 -1 etc.
print(meat[-3]) # will output "beef" because it is at index -3
You can add items in the list using append():
meat = ["beef", "lamb", "chicken"]
meat.append("pork")
print(meat)
Output:
['beef', 'lamb', 'chicken', 'pork']
"pork" will be added at the end of the list.
For removing items in the list, use remove():
meat = ['beef', 'lamb', 'chicken']
meat.remove("lamb")
print(meat)
Output:
['beef', 'chicken']
You can also use del to remove items at a specific index:
meat = ['beef', 'lamb', 'chicken']
del meat[0]
print(meat)
Output:
['lamb', 'chicken']
There are also many other things you can do with lists, check out this: https://www.w3schools.com/python/python_lists.asp for more info!
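As a small taste of those other things (a sketch reusing the same list): len() counts the items, and sort() orders them alphabetically in place:

```python
meat = ["beef", "lamb", "chicken"]
print(len(meat))  # => 3
meat.sort()       # sorts the list alphabetically, in place
print(meat)       # => ['beef', 'chicken', 'lamb']
```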
for loops
A for loop is used for iterating over a sequence. Basically, it runs a piece of code for a specific number of times.
For example:
for i in range(5):
print("Hello!")
Output:
Hello!
Hello!
Hello!
Hello!
Hello!
You can also use the for loop to print each item in a list (using the list from above):
meat = ['beef', 'lamb', 'chicken']
for i in meat:
print(i)
Output:
beef
lamb
chicken
while loops
while loops will run a piece of code as long as the condition is True.
For example:
x = 1 # sets x to 1
while x <= 10: # will repeat 10 times
print(x) # prints x
x += 1 # increments (adds 1) to x
Output:
1
2
3
4
5
6
7
8
9
10
You can also make while loops go on for infinity, like so (useful for spamming lol):
while True:
print("Theres no stopping me nowwwww!")
Output:
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
Theres no stopping me nowwwww!
# etc until infinity
Functions
A function is a group of code that only executes when it is called.
For example, instead of having to type a piece of code several times, you can put that piece of code inside a function, and then whenever you need it, you can just call it.
def greeting(): # defining the function
print("Bonjour!") # everything that is indented will be executed when the function is called
greeting() # calling the function
# you can now call this function when you want, instead of always writing the same code everytime
Output:
Bonjour!
return and arguments
The return statement is used in functions. It ends the function and "returns" the result, i.e. the value of the expression following the return keyword, to the caller. It is not mandatory; you don't have to use it.
You can also have arguments inside a function. Arguments let you pass values into the function; they go in the parentheses.
For example:
def sum(x, y): # x and y are the arguments
total = x + y
return total # sends total (x + y) back to the caller
result = sum(4, 5) # you can change those to what you want
print(result) # this will output 9, because 4+5 = 9
Imports
time
You can use time in your Python programs.
How to make the program wait:
# first import time
import time
print("Hello!")
# then for the program to wait
time.sleep(1) # write how long you want to wait (in seconds) in the parenthesis
print("Bye!")
Output:
Hello!
# program will wait 1 second
Bye!
You can also do this (simpler):
import time
from time import sleep
# instead of time.sleep(), do sleep()
# its the same
print("time.sleep(1)...")
time.sleep(1)
print("...is the same as...")
sleep(1)
print("sleep(1)!")
random
You can use the random module to randomly pick numbers with randint():
# remember to import!
import random
from random import randint
rand_num = randint(1,5)
# this will output a random number between 1 and 5 inclusive!
# this means the possible numbers are 1, 2, 3, 4, or 5
The reason I am pointing this out is that you can also use randrange():
import random
from random import randrange
rand_num = randrange(1,5)
# this will output a random number between 1 inclusive and 5 NON-inclusive (or 4 inclusive)!
# this means the possible numbers are 1, 2, 3, or 4
You can also randomly pick an item from a list with choice():
import random
from random import choice
meat = ["beef", "lamb", "chicken"]
rand_meat = choice(meat)
print(rand_meat)
# this will output a randomly chosen item of the list meat
# the possible outcomes are beef, lamb, or chicken.
math
First, you already have some functions built into Python: min() and max(). They return the smallest and biggest of the values inside the parentheses, respectively.
For example:
list_a = min(18, 12, 14, 16)
list_b = max(17, 19, 15, 13)
print(list_a) # will output 12
print(list_b) # will output 19
Now for some more modules:
You can use math.floor() and math.ceil() to round numbers down or up to the nearest int.
For example:
# first import
import math
num_a = math.floor(2.3)
num_b = math.ceil(2.3)
print(num_a) # will output 2
print(num_b) # will output 3
Explanation (from Andrew Sutherland's course): So math.floor() will round 2.3 down to the nearest lowest int, which in this case is 2. This is because, if you imagine it, the floor is on the bottom, so that's why it will round the number to the nearest lowest int.
Vice-versa for math.ceil(); it will round 2.3 up to the nearest highest int, which in this case is 3. This is because ceil is short for ceiling (programmers like to shorten words), and the ceiling is high.
You can also get pi (π):
import math
pi = math.pi
print(pi)
Output:
3.141592653589793
Here is the full list of all the things you can do with math: https://www.w3schools.com/python/module_math.asp
Small Programs You Can Use
Countdown Program:
# imports
import time
from time import sleep
def countdown(): # making a function for the countdown (so you can use it several times)
count = int(input("Countdown from what? ")) # asks user how long the countdown
while count >= 0: # will repeat until count = 0
print(count) # prints where the countdown is at
count -= 1 # subtracts 1 from count
sleep(1) # program waits 1 second before continuing
print("End of countdown!") # message after the countdown
countdown() # remember to call the function or nothing will happen
Output:
Countdown from what? 5
5
4
3
2
1
0
End of countdown!
Simple Calculator
First way using eval()
calculation = input("Type your calculation: ") # asks the user for a calculation.
print("Answer to " + str(calculation) + ": " + str(eval(calculation)))
# eval basically does the operation, like on a normal calculator.
# however, if you write something different than a valid operation, there will be an error.
Or another way, using several conditionals, and you can only do "something" + "something" (but with the operators):
def calculator(): # making a function to hold all the code for calculator
while True: # loops forever so you can make several calculations without having to press run again
first_num = int(input("Enter 1st number: ")) # asks user for 1st number
second_num = int(input("Enter 2nd number: ")) # asks user for 2nd number
operator = input("Select operator: + - * / ** // ") # asks user for operator
if operator == "+": # addition
answer = first_num + second_num
print(answer)
elif operator == "-": # subtraction
answer = first_num - second_num
print(answer)
elif operator == "*": # multiplication
answer = first_num * second_num
print(answer)
elif operator == "/": # division
answer = first_num / second_num
print(answer)
elif operator == "**": # exponentiation ("to the power of")
answer = first_num ** second_num
print(answer)
elif operator == "//": # floor division
answer = first_num // second_num
print(answer)
else: # if user selects an invalid operator
print("Invalid!")
calculator() # calls the function
But obviously that is pretty long and full of many if/elif.
Some functions that are useful:
"Press ENTER to continue" Prompt:
def enter():
    input("Press ENTER to continue! ")
# this is useful for text-based adventure games; when the player finishes reading some text, they can press ENTER and the next part will follow.
# just call the function where you need it
Spacing in between lines function:
def space():
    print()
    print()
# same as pressing ENTER twice; this is useful to make your text a bit more airy and less of a compact block.
Slowprint:
# first the imports:
import sys
import time

def sp(text):  # don't call the parameter "str" - that would shadow the built-in str type
    for letter in text:
        sys.stdout.write(letter)
        sys.stdout.flush()
        time.sleep(0.06)
    print()
# to use it:
sp("Hello there!")
# this will output "Hello there!" one letter every 0.06 seconds, creating a typewriter effect.
ANSI Escape Codes
ANSI escape codes are for controlling text in the console. You can use them to make the output nicer for the user.
For example, you can use \n for a new line:
name = input("Enter your name\n>>> ")
Output:
Enter your name>>>
This makes it look nice: the user can start typing right after the little prompt arrows >>>.
You can also use \t for tab:
print("Hello\tdude")
Output:
Hello	dude
\v for vertical tab:
print("Hello\vdude")
Output:
Hello
     dude
You can also have colors in python:
# the ANSI codes are stored in variables, making them easier to use
black = "\033[0;30m"
red = "\033[0;31m"
green = "\033[0;32m"
yellow = "\033[0;33m"
blue = "\033[0;34m"
magenta = "\033[0;35m"
cyan = "\033[0;36m"
white = "\033[0;37m"
bright_black = "\033[0;90m"
bright_red = "\033[0;91m"
bright_green = "\033[0;92m"
bright_yellow = "\033[0;93m"
bright_blue = "\033[0;94m"
bright_magenta = "\033[0;95m"
bright_cyan = "\033[0;96m"
bright_white = "\033[0;97m"
# to use them:
print(red+"Hello")
# you can also have multiple colors:
print(red+"Hel"+bright_blue+"lo")
# and you can even use it with the slowPrint I mentioned earlier!
Output: (the text prints in the chosen colors - not reproducible here)
And you can have underline and italic:
reset = "\u001b[0m"
underline = "\033[4m"
italic = "\033[3m"
# to use it:
print(italic+"Hello "+reset+" there "+underline+"Mister!")
# the reset is for taking away all changes you've made to the text
# it makes the text back to the default color and text decorations.
Output: (italic "Hello", plain "there", underlined "Mister!" - as rendered in the console)
Links: Sources and Good Websites
Sources:
Always good to use a bit of help from here and there!
W3 Schools: https://www.w3schools.com/python/default.asp
Wikipedia: https://en.wikipedia.org/wiki/Guido_van_Rossum
Wikipedia: https://en.wikipedia.org/wiki/ANSI_escape_code
https://www.python-course.eu/python3_functions.php#:~:text=A%20return%20statement%20ends%20the,special%20value%20None%20is%20returned.
Good Websites you can use:
Official website: https://www.python.org/
W3 Schools: https://www.w3schools.com/python/default.asp
https://www.tutorialspoint.com/python/index.htm
https://realpython.com/
Interactive:
Goodbye World!: End
Well, I guess this is the end. I hope y'all have learnt something new/interesting! If you have any questions, please comment and I will try my best to answer them.
In this article I will walk you through everything you need to know to connect Python and SQL.
You'll learn how to pull data from relational databases straight into your machine learning pipelines, store data from your Python application in a database of your own, or whatever other use case you might come up with.
Together we will cover:
Why learn how to use Python and SQL together?
How to set up your Python environment and MySQL Server
Connecting to MySQL Server in Python
Creating a new Database
Creating Tables and Table Relationships
Populating Tables with Data
Reading Data
Updating Records
Deleting Records
Creating Records from Python Lists
Creating re-usable functions to do all of this for us in the future
That is a lot of very useful and very cool stuff. Let's get into it!
A quick note before we start: there is a Jupyter Notebook containing all the code used in this tutorial available in this GitHub repository. Coding along is highly recommended!
The database and SQL code used here is all from my previous Introduction to SQL series posted on Towards Data Science (contact me if you have any problems viewing the articles and I can send you a link to see them for free).
If you are not familiar with SQL and the concepts behind relational databases, I would point you towards that series (plus there is of course a huge amount of great stuff available here on freeCodeCamp!)
Why Python with SQL?
For Data Analysts and Data Scientists, Python has many advantages. A huge range of open-source libraries make it an incredibly useful tool for any Data Analyst.
With its (relatively) easy learning curve and versatility, it's no wonder that Python is one of the fastest-growing programming languages out there.
So if we're using Python for data analysis, it's worth asking - where does all this data come from?
While there is a massive variety of sources for datasets, in many cases - particularly in enterprise businesses - data is going to be stored in a relational database. Relational databases are an extremely efficient, powerful and widely-used way to create, read, update and delete data of all kinds.
The most widely used relational database management systems (RDBMSs) - Oracle, MySQL, Microsoft SQL Server, PostgreSQL, IBM DB2 - all use the Structured Query Language (SQL) to access and make changes to the data.
Note that each RDBMS uses a slightly different flavour of SQL, so SQL code written for one will usually not work in another without (normally fairly minor) modifications. But the concepts, structures and operations are largely identical.
This means for a working Data Analyst, a strong understanding of SQL is hugely important. Knowing how to use Python and SQL together will give you even more of an advantage when it comes to working with your data.
The rest of this article will be devoted to showing you exactly how we can do that.
Getting Started
Requirements & Installation
To code along with this tutorial, you will need your own Python environment set up.
We will be using MySQL Community Server as it is free and widely used in the industry. If you are using Windows, this guide will help you get set up. Here are guides for Mac and Linux users too (although it may vary by Linux distribution).
Once you have those set up, we need to get them talking to each other. For that we will use the MySQL Connector/Python library, which we can install with pip:
pip install mysql-connector-python
We are also going to be using pandas, so make sure that you have that installed as well.
pip install pandas
Importing Libraries
As with every project in Python, the very first thing we want to do is import our libraries.
It is best practice to import all the libraries we are going to use at the beginning of the project, so that people reading or reviewing our code know roughly what is coming up and there are no surprises.
import mysql.connector
from mysql.connector import Error
import pandas as pd
We import the Error class separately so that we have easy access to it in our functions.
Connecting to MySQL Server
By this point we should have MySQL Community Server set up on our system. Now we need to write some code in Python that lets us establish a connection to that server.
Creating a re-usable function for code like this is best practice, so that we can use it again and again with minimum effort. Once written, you can re-use it in all of your future projects too, so future-you will be grateful!
Let's go through this line by line so we understand what's happening here:
The first line is us naming the function (create_server_connection) and naming the arguments that that function will take (host_name, user_name and user_password).
The next line closes any existing connections so that the server doesn't become confused with multiple open connections.
Next we use a Python try-except block to handle any potential errors. The first part tries to create a connection to the server using the mysql.connector.connect() method using the details specified by the user in the arguments. If this works, the function prints a happy little success message.
The except part of the block prints the error which MySQL Server returns, in the unfortunate circumstance that there is an error.
Finally, if the connection is successful, the function returns a connection object.
We use this in practice by assigning the output of the function to a variable, which then becomes our connection object. We can then apply other methods (such as cursor) to it and create other useful objects.
This should produce a success message:
Creating a New Database
Now that we have established a connection, our next step is to create a new database on our server.
In this tutorial we will do this only once, but again we will write this as a re-usable function so we have a nice useful function we can re-use for future projects.
def create_database(connection, query):
    cursor = connection.cursor()
    try:
        cursor.execute(query)
        print("Database created successfully")
    except Error as err:
        print(f"Error: '{err}'")
This function takes two arguments, connection (our connection object) and query (a SQL query which we will write in the next step). It executes the query in the server via the connection.
We use the cursor method on our connection object to create a cursor object (MySQL Connector uses an object-oriented programming paradigm, so there are lots of objects inheriting properties from parent objects).
If it helps, we can think of the cursor object as providing us access to the blinking cursor in a MySQL Server terminal window.
Next we define a query to create the database and call the function:
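A minimal sketch of that step (the variable name `create_database_query` is my choice; the database name 'school' comes from the text below):

```python
# The query itself is as simple as it gets:
create_database_query = "CREATE DATABASE school"

# then call the function defined above (needs the live connection object from earlier):
# create_database(connection, create_database_query)
```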
All the SQL queries used in this tutorial are explained in my Introduction to SQL tutorial series, and the full code can be found in the associated Jupyter Notebook in this GitHub repository, so I will not be providing explanations of what the SQL code does in this tutorial.
This is perhaps the simplest SQL query possible, though. If you can read English you can probably work out what it does!
Running the create_database function with the arguments as above results in a database called 'school' being created in our server.
Why is our database called 'school'? Perhaps now would be a good time to look in more detail at exactly what we are going to implement in this tutorial.
Our Database
Following the example in my previous series, we are going to be implementing the database for the International Language School - a fictional language training school which provides professional language lessons to corporate clients.
This Entity Relationship Diagram (ERD) lays out our entities (Teacher, Client, Course and Participant) and defines the relationships between them.
All the information regarding what an ERD is and what to consider when creating one and designing a database can be found in this article.
The raw SQL code, database requirements, and data to go into the database is all contained in this GitHub repository, but you'll see it all as we go through this tutorial too.
Connecting to the Database
Now that we have created a database in MySQL Server, we can modify our create_server_connection function to connect directly to this database.
Note that it's possible - common, in fact - to have multiple databases on one MySQL Server, so we want to always and automatically connect to the database we're interested in.
We can do this like so:
def create_db_connection(host_name, user_name, user_password, db_name):
    connection = None
    try:
        connection = mysql.connector.connect(
            host=host_name,
            user=user_name,
            passwd=user_password,
            database=db_name
        )
        print("MySQL Database connection successful")
    except Error as err:
        print(f"Error: '{err}'")
    return connection
This is the exact same function, but now we take one more argument - the database name - and pass that as an argument to the connect() method.
Creating a Query Execution Function
The final function we're going to create (for now) is an extremely vital one - a query execution function. This is going to take our SQL queries, stored in Python as strings, and pass them to the cursor.execute() method to execute them on the server.
def execute_query(connection, query):
    cursor = connection.cursor()
    try:
        cursor.execute(query)
        connection.commit()
        print("Query successful")
    except Error as err:
        print(f"Error: '{err}'")
This function is exactly the same as our create_database function from earlier, except that it uses the connection.commit() method to make sure that the commands detailed in our SQL queries are implemented.
This is going to be our workhorse function, which we will use (alongside create_db_connection) to create tables, establish relationships between those tables, populate the tables with data, and update and delete records in our database.
If you're a SQL expert, this function will let you execute any and all of the complex commands and queries you might have lying around, directly from a Python script. This can be a very powerful tool for managing your data.
Creating Tables
Now we're all set to start running SQL commands into our Server and to start building our database. The first thing we want to do is to create the necessary tables.
Let's start with our Teacher table:
create_teacher_table = """
CREATE TABLE teacher (
    teacher_id INT PRIMARY KEY,
    first_name VARCHAR(40) NOT NULL,
    last_name VARCHAR(40) NOT NULL,
    language_1 VARCHAR(3) NOT NULL,
    language_2 VARCHAR(3),
    dob DATE,
    tax_id INT UNIQUE,
    phone_no VARCHAR(20)
);
"""
connection = create_db_connection("localhost", "root", pw, db) # Connect to the Database
execute_query(connection, create_teacher_table) # Execute our defined query
First of all we assign our SQL command (explained in detail here) to a variable with an appropriate name.
In this case we use Python's triple quote notation for multi-line strings to store our SQL query, then we feed it into our execute_query function to implement it.
Note that this multi-line formatting is purely for the benefit of humans reading our code. Neither SQL nor Python 'care' if the SQL command is spread out like this. So long as the syntax is correct, both languages will accept it.
For the benefit of humans who will read your code, however, (even if that will only be future-you!) it is very useful to do this to make the code more readable and understandable.
The same is true for the CAPITALISATION of operators in SQL. This is a widely-used convention that is strongly recommended, but the actual software that runs the code is case-insensitive and will treat 'CREATE TABLE teacher' and 'create table teacher' as identical commands.
Running this code gives us our success messages. We can also verify this in the MySQL Server Command Line Client:
Great! Now let's create the remaining tables.
create_client_table = """
CREATE TABLE client (
    client_id INT PRIMARY KEY,
    client_name VARCHAR(40) NOT NULL,
    address VARCHAR(60) NOT NULL,
    industry VARCHAR(20)
);
"""

create_participant_table = """
CREATE TABLE participant (
    participant_id INT PRIMARY KEY,
    first_name VARCHAR(40) NOT NULL,
    last_name VARCHAR(40) NOT NULL,
    phone_no VARCHAR(20),
    client INT
);
"""

create_course_table = """
CREATE TABLE course (
    course_id INT PRIMARY KEY,
    course_name VARCHAR(40) NOT NULL,
    language VARCHAR(3) NOT NULL,
    level VARCHAR(2),
    course_length_weeks INT,
    start_date DATE,
    in_school BOOLEAN,
    teacher INT,
    client INT
);
"""
connection = create_db_connection("localhost", "root", pw, db)
execute_query(connection, create_client_table)
execute_query(connection, create_participant_table)
execute_query(connection, create_course_table)
Together with the teacher table we created earlier, this gives us the four tables needed for our four entities.
Now we want to define the relationships between them and create one more table to handle the many-to-many relationship between the participant and course tables (see here for more details).
We do this in exactly the same way:
alter_participant = """
ALTER TABLE participant
ADD FOREIGN KEY(client)
REFERENCES client(client_id)
ON DELETE SET NULL;
"""
alter_course = """
ALTER TABLE course
ADD FOREIGN KEY(teacher)
REFERENCES teacher(teacher_id)
ON DELETE SET NULL;
"""
alter_course_again = """
ALTER TABLE course
ADD FOREIGN KEY(client)
REFERENCES client(client_id)
ON DELETE SET NULL;
"""
create_takescourse_table = """
CREATE TABLE takes_course (
    participant_id INT,
    course_id INT,
    PRIMARY KEY(participant_id, course_id),
    FOREIGN KEY(participant_id) REFERENCES participant(participant_id) ON DELETE CASCADE,
    FOREIGN KEY(course_id) REFERENCES course(course_id) ON DELETE CASCADE
);
"""
connection = create_db_connection("localhost", "root", pw, db)
execute_query(connection, alter_participant)
execute_query(connection, alter_course)
execute_query(connection, alter_course_again)
execute_query(connection, create_takescourse_table)
Now our tables are created, along with the appropriate constraints, primary key, and foreign key relations.
Populating the Tables
The next step is to add some records to the tables. Again we use execute_query to feed our existing SQL commands into the Server. Let's again start with the Teacher table.
pop_teacher = """
INSERT INTO teacher VALUES
(1, 'James', 'Smith', 'ENG', NULL, '1985-04-20', 12345, '+491774553676'),
(2, 'Stefanie', 'Martin', 'FRA', NULL, '1970-02-17', 23456, '+491234567890'),
(3, 'Steve', 'Wang', 'MAN', 'ENG', '1990-11-12', 34567, '+447840921333'),
(4, 'Friederike', 'Müller-Rossi', 'DEU', 'ITA', '1987-07-07', 45678, '+492345678901'),
(5, 'Isobel', 'Ivanova', 'RUS', 'ENG', '1963-05-30', 56789, '+491772635467'),
(6, 'Niamh', 'Murphy', 'ENG', 'IRI', '1995-09-08', 67890, '+491231231232');
"""
connection = create_db_connection("localhost", "root", pw, db)
execute_query(connection, pop_teacher)
Does this work? We can check again in our MySQL Command Line Client:
Now to populate the remaining tables.
pop_client = """
INSERT INTO client VALUES
(101, 'Big Business Federation', '123 Falschungstraße, 10999 Berlin', 'NGO'),
(102, 'eCommerce GmbH', '27 Ersatz Allee, 10317 Berlin', 'Retail'),
(103, 'AutoMaker AG', '20 Künstlichstraße, 10023 Berlin', 'Auto'),
(104, 'Banko Bank', '12 Betrugstraße, 12345 Berlin', 'Banking'),
(105, 'WeMoveIt GmbH', '138 Arglistweg, 10065 Berlin', 'Logistics');
"""
pop_participant = """
INSERT INTO participant VALUES
(101, 'Marina', 'Berg','491635558182', 101),
(102, 'Andrea', 'Duerr', '49159555740', 101),
(103, 'Philipp', 'Probst', '49155555692', 102),
(104, 'René', 'Brandt', '4916355546', 102),
(105, 'Susanne', 'Shuster', '49155555779', 102),
(106, 'Christian', 'Schreiner', '49162555375', 101),
(107, 'Harry', 'Kim', '49177555633', 101),
(108, 'Jan', 'Nowak', '49151555824', 101),
(109, 'Pablo', 'Garcia', '49162555176', 101),
(110, 'Melanie', 'Dreschler', '49151555527', 103),
(111, 'Dieter', 'Durr', '49178555311', 103),
(112, 'Max', 'Mustermann', '49152555195', 104),
(113, 'Maxine', 'Mustermann', '49177555355', 104),
(114, 'Heiko', 'Fleischer', '49155555581', 105);
"""
pop_course = """
INSERT INTO course VALUES
(12, 'English for Logistics', 'ENG', 'A1', 10, '2020-02-01', TRUE, 1, 105),
(13, 'Beginner English', 'ENG', 'A2', 40, '2019-11-12', FALSE, 6, 101),
(14, 'Intermediate English', 'ENG', 'B2', 40, '2019-11-12', FALSE, 6, 101),
(15, 'Advanced English', 'ENG', 'C1', 40, '2019-11-12', FALSE, 6, 101),
(16, 'Mandarin für Autoindustrie', 'MAN', 'B1', 15, '2020-01-15', TRUE, 3, 103),
(17, 'Français intermédiaire', 'FRA', 'B1', 18, '2020-04-03', FALSE, 2, 101),
(18, 'Deutsch für Anfänger', 'DEU', 'A2', 8, '2020-02-14', TRUE, 4, 102),
(19, 'Intermediate English', 'ENG', 'B2', 10, '2020-03-29', FALSE, 1, 104),
(20, 'Fortgeschrittenes Russisch', 'RUS', 'C1', 4, '2020-04-08', FALSE, 5, 103);
"""
pop_takescourse = """
INSERT INTO takes_course VALUES
(101, 15),
(101, 17),
(102, 17),
(103, 18),
(104, 18),
(105, 18),
(106, 13),
(107, 13),
(108, 13),
(109, 14),
(109, 15),
(110, 16),
(110, 20),
(111, 16),
(114, 12),
(112, 19),
(113, 19);
"""
connection = create_db_connection("localhost", "root", pw, db)
execute_query(connection, pop_client)
execute_query(connection, pop_participant)
execute_query(connection, pop_course)
execute_query(connection, pop_takescourse)
Amazing! Now we have created a database complete with relations, constraints and records in MySQL, using nothing but Python commands.
We have gone through this step by step to keep it understandable. But by this point you can see that this could all very easily be written into one Python script and executed in one command in the terminal. Powerful stuff.
Reading Data
Now we have a functional database to work with. As a Data Analyst, you are likely to come into contact with existing databases in the organisations where you work. It will be very useful to know how to pull data out of those databases so it can then be fed into your python data pipeline. This is what we are going to work on next.
def read_query(connection, query):
    cursor = connection.cursor()
    result = None
    try:
        cursor.execute(query)
        result = cursor.fetchall()
        return result
    except Error as err:
        print(f"Error: '{err}'")
Again, we are going to implement this in a very similar way to execute_query. Let's try it out with a simple query to see how it works.
q1 = """
SELECT *
FROM teacher;
"""
connection = create_db_connection("localhost", "root", pw, db)
results = read_query(connection, q1)
for result in results:
print(result)
Exactly what we are expecting. The function also works with more complex queries, such as this one involving a JOIN on the course and client tables.
q5 = """
SELECT course.course_id, course.course_name, course.language, client.client_name, client.address
FROM course
JOIN client
ON course.client = client.client_id
WHERE course.in_school = FALSE;
"""
connection = create_db_connection("localhost", "root", pw, db)
results = read_query(connection, q5)
for result in results:
print(result)
Very nice.
For our data pipelines and workflows in Python, we might want to get these results in different formats to make them more useful or ready for us to manipulate.
Let's go through a couple of examples to see how we can do that.
Formatting Output into a List
# Initialise empty list
from_db = []

# Loop over the results and append them into our list
# Returns a list of tuples
for result in results:
    from_db.append(result)
Formatting Output into a List of Lists
# Returns a list of lists
from_db = []
for result in results:
    result = list(result)
    from_db.append(result)
Formatting Output into a pandas DataFrame
For Data Analysts using Python, pandas is our beautiful and trusted old friend. It's very simple to convert the output from our database into a DataFrame, and from there the possibilities are endless!
# Returns a list of lists and then creates a pandas DataFrame
from_db = []
for result in results:
    result = list(result)
    from_db.append(result)

columns = ["course_id", "course_name", "language", "client_name", "address"]
df = pd.DataFrame(from_db, columns=columns)
Hopefully you can see the possibilities unfolding in front of you here. With just a few lines of code, we can easily extract all the data we can handle from the relational databases where it lives, and pull it into our state-of-the-art data analytics pipelines. This is really helpful stuff.
Updating Records
When we are maintaining a database, we will sometimes need to make changes to existing records. In this section we are going to look at how to do that.
Let's say the ILS is notified that one of its existing clients, the Big Business Federation, is moving offices to 23 Fingiertweg, 14534 Berlin. In this case, the database administrator (that's us!) will need to make some changes.
Thankfully, we can do this with our execute_query function alongside the SQL UPDATE statement.
update = """
UPDATE client
SET address = '23 Fingiertweg, 14534 Berlin'
WHERE client_id = 101;
"""
connection = create_db_connection("localhost", "root", pw, db)
execute_query(connection, update)
Note that the WHERE clause is very important here. If we run this query without the WHERE clause, then all addresses for all records in our Client table would be updated to 23 Fingiertweg. That is very much not what we are looking to do.
Also note that we used "WHERE client_id = 101" in the UPDATE query. It would also have been possible to use "WHERE client_name = 'Big Business Federation'" or "WHERE address = '123 Falschungstraße, 10999 Berlin'" or even "WHERE address LIKE '%Falschung%'".
The important thing is that the WHERE clause allows us to uniquely identify the record (or records) we want to update.
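To see concretely what the WHERE clause protects us from, here is a self-contained sketch - it uses Python's built-in sqlite3 module only so it can run without a MySQL server; the table and the new address are the ones from the example above:

```python
import sqlite3

# throwaway in-memory database, just for demonstration
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE client (client_id INTEGER PRIMARY KEY, address TEXT)")
cur.executemany("INSERT INTO client VALUES (?, ?)",
                [(101, 'old address A'), (102, 'old address B')])

# With WHERE: only client 101 is changed
cur.execute("UPDATE client SET address = '23 Fingiertweg, 14534 Berlin' WHERE client_id = 101")

cur.execute("SELECT address FROM client ORDER BY client_id")
print(cur.fetchall())
# → [('23 Fingiertweg, 14534 Berlin',), ('old address B',)]
```

Drop the WHERE clause from the UPDATE and every row would get the new address - exactly the mistake we want to avoid.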
Deleting Records
It is also possible to use our execute_query function to delete records, by using the SQL DELETE command.
When using SQL with relational databases, we need to be careful with the DELETE operator. This isn't Windows - there is no 'Are you sure you want to delete this?' pop-up, and there is no recycle bin. Once we delete something, it's really gone.
With that said, we do really need to delete things sometimes. So let's take a look at that by deleting a course from our Course table.
First of all let's remind ourselves what courses we have.
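As a sketch (reusing the read_query function from earlier; the calls are commented out since they need a live connection), the reminder could look like this:

```python
# A hypothetical query to list the current courses
q_courses = """
SELECT course_id, course_name
FROM course;
"""

# connection = create_db_connection("localhost", "root", pw, db)
# results = read_query(connection, q_courses)
# for result in results:
#     print(result)
```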
Let's say course 20, 'Fortgeschrittenes Russisch' (that's 'Advanced Russian' to you and me), is coming to an end, so we need to remove it from our database.
By this stage, you will not be at all surprised with how we do this - save the SQL command as a string, then feed it into our workhorse execute_query function.
delete_course = """
DELETE FROM course
WHERE course_id = 20;
"""
connection = create_db_connection("localhost", "root", pw, db)
execute_query(connection, delete_course)
Let's check to confirm that had the intended effect:
'Advanced Russian' is gone, as we expected.
Go ahead and experiment with these commands, however - it doesn't matter if you delete a column or table from a database for a fictional school, and it's a good idea to become comfortable with them before moving into a production environment.
Oh CRUD
By this point, we are now able to complete the four major operations for persistent data storage.
We have learned how to:
Create - entirely new databases, tables and records
Read - extract data from a database, and store that data in multiple formats
Update - make changes to existing records in the database
Delete - remove records which are no longer needed
These are fantastically useful things to be able to do.
Before we finish things up here, we have one more very handy skill to learn.
Creating Records from Lists
We saw when populating our tables that we can use the SQL INSERT command in our execute_query function to insert records into our database.
Given that we're using Python to manipulate our SQL database, it would be useful to be able to take a Python data structure (such as a list) and insert that directly into our database.
This could be useful when we want to store logs of user activity on a social media app we have written in Python, or input from users into a Wiki we have built, for example. There are as many possible uses for this as you can think of.
def execute_list_query(connection, sql, val):
    cursor = connection.cursor()
    try:
        cursor.executemany(sql, val)
        connection.commit()
        print("Query successful")
    except Error as err:
        print(f"Error: '{err}'")
Now we have the function, we need to define an SQL command ('sql') and a list containing the values we wish to enter into the database ('val'). The values must be stored as a list of tuples, which is a fairly common way to store data in Python.
To add two new teachers to the database, we can write some code like this:
sql = '''
INSERT INTO teacher (teacher_id, first_name, last_name, language_1, language_2, dob, tax_id, phone_no)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
'''
val = [
    (7, 'Hank', 'Dodson', 'ENG', None, '1991-12-23', 11111, '+491772345678'),
    (8, 'Sue', 'Perkins', 'MAN', 'ENG', '1976-02-02', 22222, '+491443456432')
]
Notice here that in the 'sql' code we use '%s' as a placeholder for each value. The resemblance to the '%s' placeholder for a string in Python is coincidental (and, frankly, very confusing) - with the MySQL Python Connector we use '%s' for all data types (strings, ints, dates, etc.).
You can see a number of questions on Stack Overflow where someone has become confused and tried to use '%d' placeholders for integers, because that is how it works in Python. That won't work here - we need a '%s' for each column we want to add a value to.
The executemany method then takes each tuple in our 'val' list, substitutes the relevant values in place of the placeholders, and executes the SQL command once for each tuple in the list.
This can be performed for multiple rows of data, so long as they are formatted correctly. In our example we will just add two new teachers, for illustrative purposes, but in principle we can add as many as we would like.
Let's go ahead and execute this query and add the teachers to our database.
connection = create_db_connection("localhost", "root", pw, db)
execute_list_query(connection, sql, val)
Welcome to the ILS, Hank and Sue!
This is yet another deeply useful function, allowing us to take data generated in our Python scripts and applications, and enter them directly into our database.
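As an aside, if you want to experiment with the executemany pattern without a MySQL server, Python's built-in sqlite3 module works the same way - the main difference is that sqlite3 uses '?' as its placeholder rather than '%s':

```python
import sqlite3

# throwaway in-memory database, just for experimenting
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE teacher (teacher_id INTEGER PRIMARY KEY, first_name TEXT)")

sql = "INSERT INTO teacher (teacher_id, first_name) VALUES (?, ?)"
val = [
    (7, 'Hank'),
    (8, 'Sue'),
]
cur.executemany(sql, val)  # one execution per tuple in the list
conn.commit()

cur.execute("SELECT first_name FROM teacher ORDER BY teacher_id")
print([row[0] for row in cur.fetchall()])  # → ['Hank', 'Sue']
```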
Conclusion
We have covered a lot of ground in this tutorial.
We have learned how to use Python and MySQL Connector to create an entirely new database in MySQL Server, create tables within that database, define the relationships between those tables, and populate them with data.
We have covered how to Create, Read, Update and Delete data in our database.
We have looked at how to extract data from existing databases and load them into pandas DataFrames, ready for analysis and further work taking advantage of all the possibilities offered by the PyData stack.
Going in the other direction, we have also learned how to take data generated by our Python scripts and applications, and write those into a database where they can be safely stored for later retrieval and manipulation.
I hope this tutorial has helped you to see how we can use Python and SQL together to be able to manipulate data even more effectively!
If you'd like to see more of my projects and work, please visit my website at craigdoesdata.de. If you have any feedback on this tutorial, please contact me directly - all feedback is warmly received!
I was there the day they delivered a new ICL 2960 to our uni CS department. A few giant trucks turned up. They laid down an aluminium plate road way from the trucks into the building. Then rolled out a lot of very big sexy looking bright orange boxes, drives, processor, etc. Most impressive.
Heater wrote: ↑ Thu Jun 13, 2019 10:00 am
Awesome. I was there the day they delivered a new ICL 2960 to our uni CS department. A few giant trucks turned up. They laid down an aluminium plate road way from the trucks into the building. Then rolled out a lot of very big sexy looking bright orange boxes, drives, processor, etc. Most impressive.
Mainframes were always large and impressive, something the credit card sized Raspberry Pi cannot compete with despite being more powerful!
I once worked at BRA01 a large ICL building. Most of the ground floor was the machine hall for the mainframes, the largest in Europe, with a raised viewing gallery all around the edge which took a long time to walk around. Over the years more and more of the hall was converted to offices as mainframes got smaller.
https://www.icl1900.co.uk/preserve/g3ee.html
PeterO
Interests: C,Python,PIC,Electronics,Ham Radio (G0DZB),1960s British Computers.
"The primary requirement (as we've always seen in your examples) is that the code is readable. " Dougie Lawson
jahboater wrote: ↑ Thu Jun 13, 2019 10:08 am
Mainframes were always large and impressive, something the credit card sized Raspberry Pi cannot compete with despite being more powerful!
I used to work in a 21000 square foot datacentre with a large mainframe. That got replaced by a PC sitting under a desk running emulation software. The tape drives were replaced by a Unix box with a load of disks and a tape autoloader for long term storage. The banks of modems were replaced by another Unix box with an Ethernet link. The only things that stayed were the high speed printers (3000 pages in 10-15 minutes).
John_Spikowski
Posts:1614
Joined:Wed Apr 03, 2019 5:53 pm
Location:Anacortes, WA USA
Contact:Website Twitter
ScriptBasic wrote: ↑ Thu Jun 13, 2019 1:59 pm
Is that a guess or did you try running it and notice the leak?
It's by definition.
For the first, if you call something which reserves memory and don't then free that memory for re-use you are "leaking memory".
For the second, if it does just return a pointer to where 'buf' is within the function, that will have been on the stack, and may well have been overwritten or altered whenever you come to access what that pointer points to.
John_Spikowski
If you are talking about C then it is simple (though it may not appear so!)
Anything on the stack (arguments and local variables) is freed when a function is exited.
Anything originating from malloc (the heap) remains.
Any static data remains of course.
An array or string may have a pointer to its start that is on the stack.
Code: Select all
void foo( void )
{
char *ptr = malloc(42);
}
An mpz_t is a small structure (probably) on the stack that will have pointers to the real arrays.
Of course nowadays most small local objects are held in registers, but the behavior is the same.
It's got me mystified, how one can concatenate strings and return that from a function -
Code: Select all
char * GetErrorMessage(int n)
{
char * errorMessage = ConcatenateStrings("Error ", intToString(n));
return errorMessage;
}
hippy wrote: ↑ Thu Jun 13, 2019 3:17 pm
It's got me mystified how one can concatenate strings and return that from a function - probably off topic for this thread.
That looks fine!
Code: Select all
char * GetErrorMessage(int n)
{
char * errorMessage = ConcatenateStrings("Error is: ", intToString(n));
return errorMessage;
}
"errorMessage" is a 4 or 8 byte value which is stored on the stack or more likely in a register.
You have sensibly, and safely, returned that value to be used by the caller of your function.
That's all fine, because a copy of "errorMessage" was returned.
"errorMessage" itself will vanish of course, but who cares?
What is mega dangerous is:
Code: Select all
int * foo( void )
{
int num = 42;
int *ptr = #
return ptr;
}
John_Spikowski
If someone with better debugging skills than PRINT "Got Here" can verify a memory leak with my direction, it would be appreciated.
It may be time for an emulated Fibonacci roundup that includes Algol, PL/I, Basic and other languages running in emulators on the Raspberry Pi.
On the website you linked I saw a binary download for the Pi. Is the source code for the emulator included?
John_Spikowski
I reworked the code to clear op1, op2 and res with each call to the library.
If you thought undef was cute, get used to inf when trying to use those crazy numbers with standard SB math operators.
John_Spikowski
I ran the system monitor while running fibo(78000) and the memory did slowly increase, but as soon as the program ended it returned to the level it was at before running fibo. I'm wondering whether that 1.5 MB buffer, which I'm assuming is freed when the function returns, really is freed, or whether it is something in ScriptBasic's memory manager that is resource hungry.
interface.c
Code: Select all
/* GMP Extension Module
UXLIBS: -lc -lgmp
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <gmp.h>
#include "../../basext.h"
static mpz_t op1;
static mpz_t op2;
static mpz_t res;
static void gmp_clear(void){
mpz_clear(op1);
mpz_clear(op2);
mpz_clear(res);
}
/**************************
Extension Module Functions
**************************/
typedef struct _ModuleObject {
void *HandleArray;
}ModuleObject,*pModuleObject;
besVERSION_NEGOTIATE
return (int)INTERFACE_VERSION;
besEND
besSUB_START
pModuleObject p;
besMODULEPOINTER = besALLOC(sizeof(ModuleObject));
if( besMODULEPOINTER == NULL )return 0;
p = (pModuleObject)besMODULEPOINTER;
return 0;
besEND
besSUB_FINISH
pModuleObject p;
p = (pModuleObject)besMODULEPOINTER;
if( p == NULL )return 0;
return 0;
besEND
/*************
GMP Functions
*************/
besFUNCTION(fibo)
int fval;
besARGUMENTS("i")
&fval
besARGEND
char buf[1500000];
memset(buf,0,1);
mpz_init(res);
mpz_fib_ui(res, fval);
gmp_snprintf( buf,sizeof(buf),"%Zd", res );
mpz_clear(res);
besRETURN_STRING(buf);
besEND
besFUNCTION(bi_add)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
char buf[1500000];
memset(buf,0,1);
mpz_init(op1);
mpz_init(op2);
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_init(res);
mpz_add(res, op1, op2);
gmp_snprintf(buf, sizeof(buf), "%Zd", res);
gmp_clear();
besRETURN_STRING(buf);
besEND
besFUNCTION(bi_sub)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
char buf[1500000];
memset(buf,0,1);
mpz_init(op1);
mpz_init(op2);
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_init(res);
mpz_sub (res, op1, op2);
gmp_snprintf(buf, sizeof(buf), "%Zd", res);
gmp_clear();
besRETURN_STRING(buf);
besEND
besFUNCTION(bi_mul)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
char buf[1500000];
memset(buf,0,1);
mpz_init(op1);
mpz_init(op2);
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_init(res);
mpz_mul (res, op1, op2);
gmp_snprintf(buf, sizeof(buf), "%Zd", res);
gmp_clear();
besRETURN_STRING(buf);
besEND
sfibo.sb
Code: Select all
DECLARE SUB BI_ADD ALIAS "bi_add" LIB "gmp"
FUNCTION sfibo (n)
IF n < 2 THEN
sfibo = 1
ELSE
m = 0
p = 1
q = 0
FOR i = 2 TO n
m = BI_ADD(p, q)
q = p
p = m
NEXT i
sfibo = m
END IF
END FUNCTION
PRINT sfibo(78000),"\n"
Output
Code: Select all
jrs@jrs-laptop:~/sb/GMP$ time scriba sfibo.sb > sfibo.out
real 0m44.118s
user 0m43.093s
sys 0m0.937s
jrs@jrs-laptop:~/sb/GMP$ ls -l sfibo.out
-rw-r--r-- 1 jrs jrs 16302 Jun 13 12:23 sfibo.out
jrs@jrs-laptop:~/sb/GMP$ tail -c64 sfibo.out
840773259352868233566983589379711278754520073189001074454696000
jrs@jrs-laptop:~/sb/GMP$
By default it will print fibo(4784969), but it can take any Fibonacci index as argument. I have tested it up to fibo(1000000000) with different options (see below). With a special options combination and enough swap space I could also calculate fibo(2000000000).
Code: Select all
Usage: python fibo_final.py [options] [index]
or
python3 fibo_final.py [options] [index]
or
pypy fibo_final.py [options] [index]
Note: pypy cannot use gmpy2
index: Fibonacci number to be calculated, default = 4784969
Options:
-t don't print result, only time needed for Fibonacci calculation and string conversion
-n don't convert result to string in timing mode
-i use Python internal BIGINTS, even if gmpy2 is installed
-c cheat = use GMP's internal Fibonacci function if possible
-h, --help print this text
Code: Select all
pi@raspberrypi4:~ $ time python fibo_final.py | tail -c 32
4856539211500699706378405156269
real 0m1,795s
user 0m1,732s
sys 0m0,042s
pi@raspberrypi4:~ $ python fibo_final.py -t
Fibonacci calculation took 0.42829990387 seconds
String conversion took 1.28486704826 seconds
Fibonacci Number has 1000000 digits
Fibonacci calculation needed 59 recursive calculations
pi@raspberrypi4:~ $ python3 fibo_final.py -t -c
Fibonacci calculation took 0.26346707344055176 seconds
String conversion took 1.265275478363037 seconds
Fibonacci Number has 1000000 digits
pi@raspberrypi4:~ $ python fibo_final.py -t 10000000
Fibonacci calculation took 0.814356803894 seconds
String conversion took 3.36349606514 seconds
Fibonacci Number has 2089877 digits
Fibonacci calculation needed 56 recursive calculations
pi@raspberrypi4:~ $ pypy fibo_final.py -t
Fibonacci calculation took 6.37379097939 seconds
String conversion took 75.8342139721 seconds
Fibonacci Number has 1000000 digits
Fibonacci calculation needed 59 recursive calculations
pi@raspberrypi4:~ $ python3 fibo_final.py -t -i -n
Fibonacci calculation took 10.611706018447876 seconds
Fibonacci calculation needed 59 recursive calculations
Code: Select all
## Fibonacci challenge script fibo_final.py
import time, sys
use_gmp = True
use_gmp_fibo = False
timing_only = False
string_conversion = True
fibs = {0:0, 1:1, 2:1}
index = 4784969
usage = '''Usage: python fibo_final.py [options] [index]
or
python3 fibo_final.py [options] [index]
or
pypy fibo_final.py [options] [index]
Note: pypy cannot use gmpy2
index: Fibonacci number to be calculated, default = 4784969
Options:
-t don't print result, only timing for Fibonacci calculation and string conversion
-n don't convert result to string in timing mode
-i use Python internal BIGINTS, even if gmpy2 is installed
-c cheat = use GMP's internal Fibonacci function if possible
-h, --help print this text
'''
def fibo(n):
if n in fibs:
return fibs[n]
k = (n + 1) // 2
fk = fibo(k)
fk1 = fibo(k - 1)
if n & 1:
result = fk ** 2 + fk1 ** 2
else:
result = (2 * fk1 + fk) * fk
fibs[n] = result
return result
if len(sys.argv) > 1:
for arg in sys.argv[1:]:
if arg in ['-h','--help']:
print(usage)
sys.exit(0)
if arg in ['-t', '-i', '-c', '-n']:
if arg == '-t':
timing_only = True
elif arg == '-i':
use_gmp = False
elif arg == '-c':
use_gmp_fibo = True
elif arg == '-n':
string_conversion = False
else:
try:
index = int(arg)
except:
pass
if use_gmp:
try:
import gmpy2
from gmpy2 import mpz
fibs = {0:mpz(0), 1:mpz(1), 2:mpz(1)}
except:
use_gmp = False
use_gmp_fibo = False
if timing_only:
if use_gmp and use_gmp_fibo:
t = time.time()
res = gmpy2.fib(index)
fibt = time.time()-t
else:
t = time.time()
res = fibo(index)
fibt = time.time()-t
if string_conversion:
t = time.time()
restr = str(res)
strcvt = time.time()-t
print('Fibonacci('+str(index)+') calculation took '+str(fibt)+' seconds')
if string_conversion:
print('String conversion took '+str(strcvt)+' seconds')
print('Fibonacci Number has '+str(len(restr))+' digits')
if not use_gmp_fibo:
print(str(len(fibs)-3) + ' Fibonacci numbers have been calculated')
else:
if use_gmp_fibo:
print(gmpy2.fib(index))
else:
print(fibo(index))
Slim, fast webkit browser with support for audio+video+playlists+youtube+pdf+download
Optional fullscreen kiosk mode and command interface for embedded applications
Includes omxplayerGUI, an X front end for omxplayer
John_Spikowski
Can someone suggest which GMP divide function would be best? There seem to be a few options for the operator.
Code: Select all
/* GMP Extension Module
UXLIBS: -lc -lgmp
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <gmp.h>
#include "../../basext.h"
static mpz_t op1;
static mpz_t op2;
static mpz_t res;
static void gmp_clear(void){
mpz_clear(op1);
mpz_clear(op2);
mpz_clear(res);
}
static void gmp_init(void){
mpz_init(op1);
mpz_init(op2);
mpz_init(res);
}
/**************************
Extension Module Functions
**************************/
typedef struct _ModuleObject {
void *HandleArray;
}ModuleObject,*pModuleObject;
besVERSION_NEGOTIATE
return (int)INTERFACE_VERSION;
besEND
besSUB_START
pModuleObject p;
besMODULEPOINTER = besALLOC(sizeof(ModuleObject));
if( besMODULEPOINTER == NULL )return 0;
p = (pModuleObject)besMODULEPOINTER;
return 0;
besEND
besSUB_FINISH
pModuleObject p;
p = (pModuleObject)besMODULEPOINTER;
if( p == NULL )return 0;
return 0;
besEND
/*************
GMP Functions
*************/
besFUNCTION(fibo)
int fval;
besARGUMENTS("i")
&fval
besARGEND
char buf[1500000];
memset(buf,0,1);
mpz_init(res);
mpz_fib_ui(res, fval);
gmp_snprintf( buf,sizeof(buf),"%Zd", res );
mpz_clear(res);
besRETURN_STRING(buf);
besEND
besFUNCTION(bi_add)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
char buf[1500000];
memset(buf,0,1);
gmp_init();
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_add(res, op1, op2);
gmp_snprintf(buf, sizeof(buf), "%Zd", res);
gmp_clear();
besRETURN_STRING(buf);
besEND
besFUNCTION(bi_sub)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
char buf[1500000];
memset(buf,0,1);
gmp_init();
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_sub (res, op1, op2);
gmp_snprintf(buf, sizeof(buf), "%Zd", res);
gmp_clear();
besRETURN_STRING(buf);
besEND
besFUNCTION(bi_mul)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
char buf[1500000];
memset(buf,0,1);
gmp_init();
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_mul (res, op1, op2);
gmp_snprintf(buf, sizeof(buf), "%Zd", res);
gmp_clear();
besRETURN_STRING(buf);
besEND
I suggest the "tdiv" functions for truncating integer division (that is, they truncate towards zero like normal hardware integer divide works). mpz_tdiv_q() etc.
7/2 == 3 and -7/2 == -3
4/5 == 0 and -4/5 == 0
As it says, the q functions calculate only the quotient, the r functions only the remainder, and the qr functions calculate both.
The "fdiv" versions do floor division which is what the Python3 // operator does.
7/3 == 2 and -7/3 == -3
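The two rounding modes described above can be sketched in plain Python (this only illustrates the semantics, not the GMP API: Python's // operator already floors, and math.trunc gives the truncating behaviour):

```python
import math

def tdiv(a, b):
    # Truncating division (like GMP's mpz_tdiv_q): round toward zero.
    return math.trunc(a / b)

def fdiv(a, b):
    # Floor division (like GMP's mpz_fdiv_q): round toward negative
    # infinity, the same as Python's // operator.
    return a // b

print(tdiv(7, 2), tdiv(-7, 2))   # 3 -3
print(fdiv(7, 3), fdiv(-7, 3))   # 2 -3
```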
I just noticed, you are using the schoolboy's Fibonacci algorithm rather than any optimized method. That will be orders of magnitude slower and only exercises BI_ADD.
It's time to step up the game and use a better fibo algorithm. Perhaps the simplest we have so far is my original effort, which in JS looks like this:
Code: Select all
function isEven(n) {
return (n & 1) === 0;
}
let memo = [BigInt(0), BigInt(1), BigInt(1)]
//
// This is a fast and big Fibonacci number calculator based on the suggestions here:
// https://www.nayuki.io/page/fast-fibonacci-algorithms
//
function fibo (n) {
if (typeof memo[n] != 'undefined') {
return memo[n]
}
let k = Math.floor(n / 2)
let a = fibo(k);
let b = fibo(k + 1);
if (isEven(n)) {
return memo[n] = a * ((b * 2n) - a)
}
return memo[n] = a ** 2n + b ** 2n
}
We have increasingly more complicated fibos that will perhaps double that performance but this is a good place to start.
You are so close to the fibo challenge solution, I'd love to see it working!
Edit: Fixed the code !
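For comparison, the same memoized fast-doubling recursion can be sketched in Python (a rough translation of the JS above, not a tested drop-in for the challenge):

```python
# Memoized fast Fibonacci based on the doubling identities
# (see https://www.nayuki.io/page/fast-fibonacci-algorithms).
memo = {0: 0, 1: 1, 2: 1}

def fibo(n):
    if n in memo:
        return memo[n]
    k = n // 2
    a = fibo(k)
    b = fibo(k + 1)
    if n % 2 == 0:
        memo[n] = a * (2 * b - a)      # F(2k)   = F(k) * (2*F(k+1) - F(k))
    else:
        memo[n] = a * a + b * b        # F(2k+1) = F(k)^2 + F(k+1)^2
    return memo[n]

print(fibo(30))  # 832040
```

Because each call roughly halves n, only O(log n) distinct values are computed, instead of the n additions the schoolboy loop needs.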
John_Spikowski
interface.c
Code: Select all
/* GMP Extension Module
UXLIBS: -lc -lgmp
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <gmp.h>
#include "../../basext.h"
static mpz_t op1;
static mpz_t op2;
static mpz_t res;
static void gmp_clear(void){
mpz_clear(op1);
mpz_clear(op2);
mpz_clear(res);
}
static void gmp_init(void){
mpz_init(op1);
mpz_init(op2);
mpz_init(res);
}
/**************************
Extension Module Functions
**************************/
typedef struct _ModuleObject {
void *HandleArray;
}ModuleObject,*pModuleObject;
besVERSION_NEGOTIATE
return (int)INTERFACE_VERSION;
besEND
besSUB_START
pModuleObject p;
besMODULEPOINTER = besALLOC(sizeof(ModuleObject));
if( besMODULEPOINTER == NULL )return 0;
p = (pModuleObject)besMODULEPOINTER;
return 0;
besEND
besSUB_FINISH
pModuleObject p;
p = (pModuleObject)besMODULEPOINTER;
if( p == NULL )return 0;
return 0;
besEND
/*************
GMP Functions
*************/
besFUNCTION(fibo)
int fval;
besARGUMENTS("i")
&fval
besARGEND
char buf[1500000];
memset(buf,0,1);
mpz_init(res);
mpz_fib_ui(res, fval);
gmp_snprintf( buf,sizeof(buf),"%Zd", res );
mpz_clear(res);
besRETURN_STRING(buf);
besEND
besFUNCTION(bi_add)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
gmp_init();
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_add(res, op1, op2);
char* res_string = mpz_get_str (0, 10, res);
besSET_RETURN_STRING(res_string);
gmp_clear();
free(res_string);
besEND
besFUNCTION(bi_sub)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
gmp_init();
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_sub (res, op1, op2);
char* res_string = mpz_get_str (0, 10, res);
besSET_RETURN_STRING(res_string);
gmp_clear();
free(res_string);
besEND
besFUNCTION(bi_mul)
const char* s1;
const char* s2;
besARGUMENTS("zz")
&s1, &s2
besARGEND
gmp_init();
mpz_set_str(op1, s1, 10);
mpz_set_str(op2, s2, 10);
mpz_mul (res, op1, op2);
char* res_string = mpz_get_str (0, 10, res);
besSET_RETURN_STRING(res_string);
gmp_clear();
free(res_string);
besEND
John_Spikowski
Code: Select all
DECLARE SUB BI_ADD ALIAS "bi_add" LIB "gmp"
DECLARE SUB BI_SUB ALIAS "bi_sub" LIB "gmp"
DECLARE SUB BI_MUL ALIAS "bi_mul" LIB "gmp"
a = 123456789
b = 987654321
c = a * b
PRINT c,"\n"
PRINT BI_MUL(c, STRREVERSE(c)),"\n"
Output
Code: Select all
jrs@jrs-laptop:~/sb/GMP$ scriba gmpmul.sb
121932631112635269
117364572765028660532228440042158549
jrs@jrs-laptop:~/sb/GMP$
"Why I like this GMP interface."
That is cool and all. I guess we may not have commented on this use of numbers and strings together if it were not a regular thing we can do in many other languages.
For example in my big integer class I define the operators +, -, *, << that work on numbers stored as big arrays of integers. It would be a trivial task to define the operators such that they can also work on combinations of big integers and strings, big integers and regular ints, or potentially even just regular ints and strings.
Many programmers hate this kind of sloppy typing though. They would much prefer to have strict type checking and require the user explicitly do the type conversions in their code.
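As a sketch of what that kind of deliberate "sloppy typing" looks like, here is a toy Python class (hypothetical names; Python's arbitrary-precision int stands in for the big-integer array representation described above) whose operators accept strings and plain ints alike:

```python
class BigInt:
    """Toy big integer whose operators coerce strings and ints."""

    def __init__(self, value):
        self.value = int(value)          # accepts an int or a decimal string

    def _coerce(self, other):
        # Accept another BigInt, a plain int, or a decimal string.
        return other.value if isinstance(other, BigInt) else int(other)

    def __add__(self, other):
        return BigInt(self.value + self._coerce(other))

    def __mul__(self, other):
        return BigInt(self.value * self._coerce(other))

    def __str__(self):
        return str(self.value)

print(BigInt("123456789") * "987654321")   # 121932631112635269
```

The coercion happens in one private helper, so each operator stays a one-liner; strict-typing fans would instead raise TypeError in `_coerce` for anything but BigInt.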
Famously many "real" programmers laugh at Javascript for doing the following:
Notice how JS gives the "wrong" answer for 1111 + "1111" and "1111" + 1111. Well of course, the + operator is used as a nice simple string concatenation operator, so that's what it does here having converted the numbers to strings. Conversely -, *, / don't make any obvious sense for strings so they are used as maths operations after converting the strings to numbers.
> 1111 + 1111
2222
> 1111 - 1111
0
> 1111 * 1111
1234321
> 1111 / 1111
1
> 1111 + "1111"
'11111111'
> 1111 - "1111"
0
> 1111 * "1111"
1234321
> 1111 / "1111"
1
> "1111" + 1111
'11111111'
> "1111" - 1111
0
> "1111" * 1111
1234321
> "1111" / 1111
1
> "1111" + "1111"
'11111111'
> "1111" - "1111"
0
> "1111" * "1111"
1234321
> "1111" / "1111"
1
It all makes perfect sense. Sometimes "real" programmers are full of nonsense.
Why I don't like this GMP interface:
Code: Select all
mpz_add(res, op1, op2);
char* res_string = mpz_get_str (0, 10, res);
besSET_RETURN_STRING(res_string);
gmp_clear();
free(res_string);
That's a lot of redundant memory allocation and copying going on.
> 1111 + "1111"
'11111111'
> 1111 - "1111"
0
because of its inconsistency: "-" does arithmetic while "+" does string concatenation; one operator converts the number to a string, the other converts a string to a number.
There are many C++ interfaces for GMP; I see there is now one provided by the GMP library itself:
https://gmplib.org/manual/C_002b_002b-C ... -Interface
Allowing stuff like
Code: Select all
#include <iostream>
#include <gmpxx.h>
using namespace std;

int
main (void)
{
  mpz_class a, b, c;
  a = 1234;
  b = "-5678";
  c = a+b;
  cout << "sum is " << c << "\n";
  cout << "absolute value is " << abs(c) << "\n";
  return 0;
}
John_Spikowski
"That's a lot of redundant memory allocation and copying going on."
The price of seamless integration.
Keep in mind the GMP C library has to interface with ScriptBasic's thread safe (default mode) memory manager. You rarely interface with memory directly and must use SB's API instead. Luckily Peter wrote a high level macro interface to hide his object pointer madness.
You can compile / run ScriptBasic in single threaded mode, which doesn't use the memory manager and allows a more direct C level interface (it uses malloc instead).
The C++ integration, which looks very seamless, doesn't do all that (see the example and link in my previous post).
The problem is, as always, mixing two languages where one language has a richer choice of data types than the other. |
Original author: Armin Ronacher
Original article: http://lucumr.pocoo.org/2015/11/18/pythons-hidden-re-gems/
There are many truly awful modules in the Python standard library, but Python's re module is not one of them. Although it is old and has not been updated in years, it is still, in my view, one of the best regular expression modules among dynamic languages.
I keep finding interesting things about this module. Python is one of the few dynamic languages without regular expressions integrated into the language itself. While it lacks interpreter-level syntax support, it makes up for it from a pure API point of view. At the same time it is quite quirky: its parser, for instance, is implemented in pure Python, and if you ever trace through the import of the module you will notice something odd - it spends 90% of its time in one of re's support modules.
Python's regular expression module has been in the standard library for a very long time. Python 3 aside, it has barely changed since it was added, apart from gaining basic unicode support along the way. Even today (translator's note: this was written in November 2015) its member enumeration is still broken (try calling dir() on a compiled regex pattern object).
The upside of an old module, however, is that it behaves the same across Python versions, which makes it very reliable. I have never had to adjust anything because of a change to the regular expression module. For someone like me who writes a lot of regular expressions, that is good news.
One interesting quirk of its design: the parser and compiler are written in Python, while the matcher is written in C. If you want, you can bypass regex parsing and pass the parser's internal structures straight to the compiler. This is undocumented, but it works.
Beyond that, there are many other things in the regex system that are undocumented or under-documented. So I want to show by example why Python's regular expression module is so cool.
Without doubt, one of the strongest features of Python's regex system is its strict distinction between matching and searching. That is not something you find in many other regex engines. Specifically, you can pass an index as an offset when matching, and the match will be anchored at that position.
Concretely, that means you can do things like this:
>>> pattern = re.compile('bar')
>>> string = 'foobar'
>>> pattern.match(string) is None
True
>>> pattern.match(string, 3)
<_sre.SRE_Match object at 0x103c9a510>
This is enormously helpful for implementing parsers, because you can keep using ^ to mark the beginning of the string and simply increment the index for subsequent matches. It also means we never have to slice the string ourselves, which saves a lot of memory and string-copying overhead (not that Python is particularly good at that anyway).
Besides matching, Python can also search, which means it keeps scanning forward until it finds a match:
>>> pattern = re.compile('bar')
>>> pattern.search('foobar')
<_sre.SRE_Match object at 0x103c9a578>
>>> _.start()
3
A common problem is that strings without any match put a heavy burden on Python. Think about writing a tokenizer for a wiki-like syntax (markdown, for example). Between the markers that denote formatting there are long runs of plain text that also need to be processed. So while matching the markers, we keep looking for the next marker. How do we skip over the text in between?
One approach is to compile several regular expressions, put them in a list, and try each one in turn. If none of them matches, we skip one character:
rules = [
    ('bold', re.compile(r'\*\*')),
    ('link', re.compile(r'\[\[(.*?)\]\]')),
]

def tokenize(string):
    pos = 0
    last_end = 0
    while 1:
        if pos >= len(string):
            break
        for tok, rule in rules:
            match = rule.match(string, pos)
            if match is not None:
                start, end = match.span()
                if start > last_end:
                    yield 'text', string[last_end:start]
                yield tok, match.group()
                last_end = pos = match.end()
                break
        else:
            pos += 1
    if last_end < len(string):
        yield 'text', string[last_end:]
This is not an elegant solution, and it is not fast either. The more non-matching text there is, the slower it gets, because we only advance one character at a time, the loop runs in the Python interpreter, and the whole process is quite inflexible. For each token we only get the matched text, and if groups are needed it has to be extended further.
Is there a better way? Could we tell the regex engine that we want it to scan for any one of several regular expressions?
This is where it gets interesting: essentially that is what we do when we write a subpattern (a|b). The engine searches for either a or b. So we could build one enormous regular expression out of all our existing ones and then match with that. The downside is that all the groups involved make it extremely confusing.
Here is the fun part: for the past 15 years there has been an undocumented feature in the regex engine - the scanner. A scanner is an attribute of the underlying SRE pattern object through which the engine, after finding one match, continues with the next. There is even a re.Scanner class (also undocumented), built on top of the SRE pattern scanner, that provides a slightly higher-level interface.
The scanner in the re module is not all that useful for making the "does not match" case faster, but reading its source code shows how it is implemented: on top of the basic SRE primitives.
The way it works is that it accepts a list of regular expressions paired with callbacks. For each match it invokes the callback and builds a result list from that. Internally it manually constructs SRE pattern and subpattern objects (roughly speaking, it builds a larger regular expression without having to parse it). Armed with this knowledge, we can extend it:
from sre_parse import Pattern, SubPattern, parse
from sre_compile import compile as sre_compile
from sre_constants import BRANCH, SUBPATTERN

class Scanner(object):

    def __init__(self, rules, flags=0):
        pattern = Pattern()
        pattern.flags = flags
        pattern.groups = len(rules) + 1

        self.rules = [name for name, _ in rules]

        self._scanner = sre_compile(SubPattern(pattern, [
            (BRANCH, (None, [SubPattern(pattern, [
                (SUBPATTERN, (group, parse(regex, flags, pattern))),
            ]) for group, (_, regex) in enumerate(rules, 1)]))
        ])).scanner

    def scan(self, string, skip=False):
        sc = self._scanner(string)

        match = None
        for match in iter(sc.search if skip else sc.match, None):
            yield self.rules[match.lastindex - 1], match

        if not skip and not match or match.end() < len(string):
            raise EOFError(match.end())
How do we use it? Like this:
scanner = Scanner([
    ('whitespace', r'\s+'),
    ('plus', r'\+'),
    ('minus', r'\-'),
    ('mult', r'\*'),
    ('div', r'/'),
    ('num', r'\d+'),
    ('paren_open', r'\('),
    ('paren_close', r'\)'),
])

for token, match in scanner.scan('(1 + 2) * 3'):
    print (token, match.group())
In the code above, an EOFError is raised when a piece of the string cannot be parsed, but if you pass skip=True, the unparsable parts are skipped instead, which is perfect for implementing things like a wiki syntax parser.
When skipping, we can use match.start() and match.end() to figure out which part was skipped. So the first example can be rewritten like this:
scanner = Scanner([
    ('bold', r'\*\*'),
    ('link', r'\[\[(.*?)\]\]'),
])

def tokenize(string):
    pos = 0
    for rule, match in scanner.scan(string, skip=True):
        hole = string[pos:match.start()]
        if hole:
            yield 'text', hole
        yield rule, match.group()
        pos = match.end()
    hole = string[pos:]
    if hole:
        yield 'text', hole
One annoyance remains: the group indexes are numbered relative to the combined expression rather than the original one. So if you have a rule like (a|b), referring to the group by index gives the wrong result. A little extra work is needed - a wrapper class around the SRE match object that translates the indexes and group names. If you are curious, I have put a more complicated version of the above, together with a match wrapper and examples showing how to use it, in a github repository. |
BERT large model (uncased) for Sentence Embeddings in Russian language.
The model is described in this article
For better quality, use mean token embeddings.
Usage (HuggingFace Models Repository)
You can use the model directly from the model repository to compute sentence embeddings:
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
#Sentences we want sentence embeddings for
sentences = ['Привет! Как твои дела?',
'А правда, что 42 твое любимое число?']
#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/sbert_large_nlu_ru")
model = AutoModel.from_pretrained("sberbank-ai/sbert_large_nlu_ru")
#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=24, return_tensors='pt')
#Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
#Perform pooling. In this case, mean pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
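To see what the attention mask is doing in mean_pooling above, here is the same masked-mean computation carried out on tiny hand-written vectors in plain Python (no torch; the numbers are made up for illustration):

```python
def masked_mean(token_embeddings, attention_mask):
    # Mirrors mean_pooling above: average only the unmasked token
    # vectors of each sentence.
    result = []
    for tokens, mask in zip(token_embeddings, attention_mask):
        dims = len(tokens[0])
        sums = [0.0] * dims
        count = 0
        for vec, m in zip(tokens, mask):
            if m:
                sums = [s + v for s, v in zip(sums, vec)]
                count += 1
        result.append([s / max(count, 1) for s in sums])
    return result

tokens = [[[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]],
          [[2.0, 2.0], [4.0, 4.0], [9.0, 9.0]]]
mask = [[1, 1, 1], [1, 1, 0]]        # second sentence: last token is padding
print(masked_mean(tokens, mask))     # [[3.0, 3.0], [3.0, 3.0]]
```

Without the mask, the padding vector [9.0, 9.0] would drag the second sentence's mean up to [5.0, 5.0], which is why the masked average matters for padded batches.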
Description
Given a non-negative integer numRows, generate the first numRows of Pascal’s triangle.
In Pascal’s triangle, each number is the sum of the two numbers directly above it.
Example:
Input: 5
Output:
[
     [1],
     [1,1],
     [1,2,1],
     [1,3,3,1],
     [1,4,6,4,1]
]
Explanation
Use a dynamic programming approach to construct Pascal's triangle, building each row from the previous row.
Python Solution
class Solution:
def generate(self, numRows: int) -> List[List[int]]:
result = []
if numRows < 1:
return result
for i in range(0, numRows):
row = []
if i == 0:
row.append(1)
else:
row.insert(0, 1)
row.insert(i, 1)
for j in range(1, i):
left_above = result[i - 1][j - 1]
right_above = result[i - 1][j]
row.insert(j, left_above + right_above)
result.append(row)
return result
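The same recurrence (row[j] = prev[j-1] + prev[j]) can also be written as a compact standalone function; this is an equivalent sketch, not the reference solution:

```python
def generate(num_rows):
    # Build each row from the previous one: every interior entry is the
    # sum of the two entries directly above it.
    result = []
    for i in range(num_rows):
        row = [1] * (i + 1)
        for j in range(1, i):
            row[j] = result[i - 1][j - 1] + result[i - 1][j]
        result.append(row)
    return result

print(generate(5))
# [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]]
```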
Time complexity: O(numRows^2).
Space complexity: O(numRows^2). |
Description
Given a binary tree, you need to compute the length of the diameter of the tree. The diameter of a binary tree is the length of the longest path between any two nodes in a tree. This path may or may not pass through the root.
Example:
Given a binary tree
    1
   / \
  2   3
 / \
4   5
Return 3, which is the length of the path [4,2,1,3] or [5,2,1,3].
Note: The length of path between two nodes is represented by the number of edges between them.
Explanation
The longest path through a given node is the sum of the maximum heights of its left and right subtrees.
Python Solution
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution:
def diameterOfBinaryTree(self, root: TreeNode) -> int:
if root == None:
return 0
left_max = self.max_height(root.left)
right_max = self.max_height(root.right)
return max(self.diameterOfBinaryTree(root.left), self.diameterOfBinaryTree(root.right), left_max + right_max)
def max_height(self, root):
if root == None:
return 0
left = self.max_height(root.left)
right = self.max_height(root.right)
return max(left, right) + 1
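A standalone sketch of the same recursion, applied to the example tree from the problem statement (free functions instead of the Solution class):

```python
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

def height(node):
    # Height in nodes: 0 for an empty subtree.
    if node is None:
        return 0
    return max(height(node.left), height(node.right)) + 1

def diameter(node):
    # Longest path through this node = left height + right height (edges);
    # the answer may also lie entirely inside one subtree.
    if node is None:
        return 0
    through_root = height(node.left) + height(node.right)
    return max(through_root, diameter(node.left), diameter(node.right))

# Build the example tree: 1 with children 2, 3; 2 with children 4, 5.
root = TreeNode(1)
root.left, root.right = TreeNode(2), TreeNode(3)
root.left.left, root.left.right = TreeNode(4), TreeNode(5)
print(diameter(root))  # 3
```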
Time complexity: O(N).
Space complexity: O(N). |
Make a Full Lexer in Python!
What is a Lexer?
A lexer is an analyzer that moves through your code looking at each character and trying to create tokens out of them.
This input
int a =5*5
Can be turned into
[('KeyWord', 'int'), ('ID', 'a'), ('assign', '='), ('num', 5), ('OP', '*'), ('num', 5)]
by the Lexer you will learn how to create.
What if I Have Problems?
If you have trouble understanding something or you get errors, tell me and I'll try my best to tell you what's wrong.
Let's Get Started!
First, open a new python repl with whatever name you choose.
Then create a function lex. This will be our function that basically does everything :).
Then, make a variable code set to input(). Make sure the code initialization is not in the function.
And call lex on code.
def lex(line):
code = input()
lex(code)
After this set a new variable, in lex, and name it count or lexeme_count or something, set it to 0.
def lex(line):
lexeme_count = 0
code = input()
lex(code)
The lexeme_count variable is going to keep track of the chars you have already scanned.
Once you have that code, add a while loop saying that if chars you have scanned is less than length of line, keep scanning.
def lex(line):
lexeme_count = 0
while lexeme_count < len(line):
lexeme_count += 1
code = input()
lex(code)
We will then make it more powerful by knowing what each lexeme is.
def lex(line):
lexeme_count = 0
while lexeme_count < len(line):
lexeme = line[lexeme_count]
lexeme_count += 1
code = input()
lex(code)
Then we can tell what the type is by using an if-elif-else statement to check the type of lexeme.
Make sure to move the lexeme_count += 1 part into the else.
def lex(line):
lexeme_count = 0
while lexeme_count < len(line):
lexeme = line[lexeme_count]
if lexeme.isdigit():
    pass  # we will fill this in next
elif lexeme.isalpha():
    pass  # we will fill this in next
else:
    lexeme_count += 1
code = input()
lex(code)
Let's fill in the blank conditional blocks.
def lex(line):
lexeme_count = 0
while lexeme_count < len(line):
lexeme = line[lexeme_count]
if lexeme.isdigit():
typ, tok, consumed = lex_num(line[lexeme_count:])
lexeme_count += consumed
elif lexeme.isalpha():
typ, tok, consumed = lex_str(line[lexeme_count:])
lexeme_count += consumed
else:
lexeme_count += 1
code = input()
lex(code)
Whoa, Slow Down! What's Going on?
What we're doing here is making three variables: one for the type of each token, one for the token itself, and one for the amount of lexemes "consumed", "eaten", or "scanned".
Then we are assigning those variables to a function call which takes the rest of the line and gets the rest of the token. We do this both for digits and strings.
After this, we change the lexeme_count by the amount of chars consumed so they keep up with each other.
Is this it?
This is certainly not the full lexical analyzer, so let's add some identifier lexing!
Once we have finished with that, we can scan for literals, conditionals, operators, keywords, etc.
Let's Lex Some Identifiers!
Add another elif to the if-elif-else statement; this will check if lexeme is a letter of the alphabet.
def lex(line):
lexeme_count = 0
while lexeme_count < len(line):
lexeme = line[lexeme_count]
if lexeme.isdigit():
typ, tok, consumed = lex_num(line[lexeme_count:])
lexeme_count += consumed
elif lexeme == '"' or lexeme == "'":
typ, tok, consumed = lex_str(line[lexeme_count:])
lexeme_count += consumed
elif lexeme.isalpha():
else:
lexeme_count += 1
code = input()
lex(code)
In this elif, we need to mirror what we did earlier, but with a call to a different function; lex_id().
def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        if lexeme.isdigit():
            typ, tok, consumed = lex_num(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme == '"' or lexeme == "'":
            typ, tok, consumed = lex_str(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme.isalpha():
            typ, tok, consumed = lex_id(line[lexeme_count:])
            lexeme_count += consumed
        else:
            lexeme_count += 1

code = input()
lex(code)
Time to Make the Functions!
We used three functions, but we haven't defined them. Let's go ahead and do that.
def lex_num(line):
    num = ""

def lex_str(line):
    delimiter = line[0]
    string = ""

def lex_id(line):
    id = ""
First we'll make the lex_num function scan to the end of the number and return it.
def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""

def lex_id(line):
    id = ""
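As a quick sanity check, lex_num can already be tried on its own. Note the num += c line inside the loop, which is what actually builds up the number:

```python
def lex_num(line):
    # Accumulate consecutive digits from the start of the line.
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    # Return the token type, its value, and how many characters were consumed.
    return 'num', int(num), len(num)

print(lex_num("123 + 4"))  # ('num', 123, 3)
```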
We will then fill out the lex_str() function doing the same thing as the digit one but for a string instead.
def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""
    for c in line[1:]:  # skip the opening delimiter
        string += c
    return 'str', string, len(string) + 2  # + 2 for the two delimiters

def lex_id(line):
    id = ""
And now we will fill out the lex_id() function!
def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""
    for c in line[1:]:  # skip the opening delimiter
        if c == delimiter:
            break
        string += c
    return 'str', string, len(string) + 2  # + 2 for the two delimiters

def lex_id(line):
    id = ""
    for c in line:
        if not c.isdigit() and not c.isalpha() and c != "_":
            break
        id += c
    return 'ID', id, len(id)
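Here is lex_id tried on its own. In this sketch the variable is called ident instead of id, to avoid shadowing Python's built-in id:

```python
def lex_id(line):
    # Collect letters, digits, and underscores until something else appears.
    ident = ""
    for c in line:
        if not c.isdigit() and not c.isalpha() and c != "_":
            break
        ident += c
    return 'ID', ident, len(ident)

print(lex_id("count_1 = 5"))  # ('ID', 'count_1', 7)
```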
What About Keywords?
Yes, we will need to change the lex_id() function to know about keywords...
What are you waiting for, read on!
We are going to make a list of keywords and check the id.
def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""
    for c in line[1:]:  # skip the opening delimiter
        if c == delimiter:
            break
        string += c
    return 'str', string, len(string) + 2  # + 2 for the two delimiters

def lex_id(line):
    keys = ['print', 'var', 'while', 'if', 'elif', 'else']
    id = ""
    for c in line:
        if not c.isdigit() and not c.isalpha() and c != "_":
            break
        id += c
    if id in keys:
        return 'key', id, len(id)
    else:
        return 'ID', id, len(id)
The Entire Code
I know you want to go out and try this, but if you need it, here is the full working code.
BTW, if you copy and paste code from a formatted page like this one, curly quotes can sneak in, and those are not valid in programming. I guess you either have to manually replace them with straight quotes XD, or just look at this code as a reference.
If you want to copy and paste :(, do it below on my better lexer.
def lex_num(line):
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    delimiter = line[0]
    string = ""
    for c in line[1:]:  # skip the opening delimiter
        if c == delimiter:
            break
        string += c
    return 'str', string, len(string) + 2  # + 2 for the two delimiters

def lex_id(line):
    keys = ['print', 'var', 'while', 'if', 'elif', 'else']
    id = ""
    for c in line:
        if not c.isdigit() and not c.isalpha() and c != "_":
            break
        id += c
    if id in keys:
        return 'key', id, len(id)
    else:
        return 'ID', id, len(id)

def lex(line):
    lexeme_count = 0
    while lexeme_count < len(line):
        lexeme = line[lexeme_count]
        if lexeme.isdigit():
            typ, tok, consumed = lex_num(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme == '"' or lexeme == "'":
            typ, tok, consumed = lex_str(line[lexeme_count:])
            lexeme_count += consumed
        elif lexeme.isalpha():
            typ, tok, consumed = lex_id(line[lexeme_count:])
            lexeme_count += consumed
        else:
            lexeme_count += 1

code = input()
lex(code)
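To see the lexer actually produce something, here is a self-contained variant that collects the (type, token) pairs into a list instead of discarding them. The token list is my addition; note also that consumed for strings must count the two quote characters, or the main loop never advances past them:

```python
def lex_num(line):
    # Read consecutive digits from the start of the line.
    num = ""
    for c in line:
        if not c.isdigit():
            break
        num += c
    return 'num', int(num), len(num)

def lex_str(line):
    # line[0] is the opening quote; read until the matching closing quote.
    delimiter = line[0]
    string = ""
    for c in line[1:]:
        if c == delimiter:
            break
        string += c
    return 'str', string, len(string) + 2  # + 2 for the two quotes

def lex_id(line):
    # Read an identifier and check whether it is a keyword.
    keys = ['print', 'var', 'while', 'if', 'elif', 'else']
    ident = ""
    for c in line:
        if not c.isdigit() and not c.isalpha() and c != "_":
            break
        ident += c
    if ident in keys:
        return 'key', ident, len(ident)
    return 'ID', ident, len(ident)

def lex(line):
    # Scan the line, dispatching on the first character of each lexeme.
    tokens = []
    i = 0
    while i < len(line):
        ch = line[i]
        if ch.isdigit():
            typ, tok, consumed = lex_num(line[i:])
        elif ch in ('"', "'"):
            typ, tok, consumed = lex_str(line[i:])
        elif ch.isalpha():
            typ, tok, consumed = lex_id(line[i:])
        else:
            i += 1  # skip whitespace and anything unrecognized
            continue
        tokens.append((typ, tok))
        i += consumed
    return tokens

print(lex('var x = 10'))  # [('key', 'var'), ('ID', 'x'), ('num', 10)]
```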
Nice! But I think that you should start with the token-making part first, for example, building a Token class that contains information about the token (like column, lexeme, line, and type), then start with the lexer. It will make everything more convenient, since you just need to create a Token instance, feed information into it, and return it to the user. It follows the DRY (Don't Repeat Yourself) rule very well! But anyway, nice one!
# Lutron Binding
This binding integrates with Lutron (opens new window) lighting control and home automation systems. It contains support for four different types of Lutron systems via different bridge things:
RadioRA 2, HomeWorks QS, Caseta, RA2 Select, and other current systems that can be controlled via Lutron Integration Protocol (LIP) or LEAP
The original RadioRA system, referred to here as RadioRA Classic
Legacy HomeWorks RS232 Processors
Grafik Eye 3x/4x systems with GRX-PRG or GRX-CI-PRG control interfaces
Each is described in a separate section below.
# Lutron RadioRA 2/HomeWorks QS/RA2 Select/Caseta Binding
Note: While the Lutron Integration Protocol used by ipbridge in this binding should largely be compatible with other current Lutron systems, it has only been fully tested with RadioRA 2, HomeWorks QS, and Caseta with Smart Bridge Pro. HomeWorks QS support is still a work in progress, since not all features/devices are supported yet. RA2 Select systems work with the binding, but full support for all devices still needs to be confirmed. Caseta Smart Bridge (non-Pro model) support and support for Caseta occupancy sensors are available only through the experimental leapbridge thing. The binding has not been tested with Quantum, QS Standalone, myRoom Plus, or Athena systems.
# Supported Things
This binding currently supports the following thing types:
ipbridge - The Lutron main repeater/processor/hub
leapbridge - Experimental bridge that uses LEAP protocol (Caseta & RA2 Select only)
dimmer - Light dimmer
switch - Switch or relay module
fan - Fan controller
occupancysensor - Occupancy/vacancy sensor
ogroup - Occupancy group
keypad - Lutron seeTouch or Hybrid seeTouch Keypad
ttkeypad - Tabletop seeTouch Keypad
intlkeypad - International seeTouch Keypad (HomeWorks QS only)
palladiomkeypad - Palladiom Keypad (HomeWorks QS only)
pico - Pico Keypad
grafikeyekeypad - GRAFIK Eye QS Keypad (RadioRA 2/HomeWorks QS only)
virtualkeypad - Repeater/processor integration buttons or Caseta Smart Bridge scene buttons
vcrx - Visor control receiver module (VCRX)
qsio - QS IO Interface (HomeWorks QS only)
wci - QS Wallbox Closure Interface (WCI) (HomeWorks QS only)
cco - Contact closure output module or VCRX CCO
shade - Lutron shade, motorized drape, or motor controller
blind - Lutron venetian blind or horizontal sheer blind [Experimental]
greenmode - Green Mode subsystem
timeclock - Scheduling subsystem
sysvar - System state variable (HomeWorks QS only) [Experimental]
# Discovery
Full discovery is supported for RadioRA 2 and HomeWorks QS systems. Both the main repeaters/processors themselves and the devices connected to them will be automatically discovered. Discovered repeaters/processors will be accessed using the default integration credentials. These can be changed in the bridge thing configuration. Discovered keypad devices should have their model parameters automatically set to the correct value.
Caseta Smart Bridge hubs, Smart Bridge Pro 2 hubs, and RA2 Select main repeaters should be discovered automatically via mDNS. Devices attached to them need to be configured manually when using ipbridge. The experimental leapbridge supports full automated discovery of these systems, but authentication information must be manually entered.
Other supported Lutron systems must be configured manually.
Note: Discovery selects ipbridge for HomeWorks QS, RadioRA 2, RA2 Select, and Caseta Smart Bridge Pro. It selects leapbridge for Caseta Smart Bridge, since only the LEAP protocol is supported by that system.
# Binding Configuration
This binding does not require any special configuration.
# Thing Configuration and Usage
Each Lutron thing requires the integration ID of the corresponding item in the Lutron system. The integration IDs can be retrieved from the integration report generated by the Lutron software. If a thing will not come online, but instead has the status "UNKNOWN: Awaiting initial response", it is likely that you have configured the wrong integration ID for it.
# Bridges
Two different bridges are now supported by the binding for current Lutron systems, ipbridge and leapbridge. The LIP protocol is supported by ipbridge while the LEAP protocol is supported by leapbridge. Current systems support one or both protocols as shown below.
| Bridge Device | LIP | LEAP |
|---|---|---|
| HomeWorks QS Processor | X | |
| RadioRA 2 Main Repeater | X | |
| RA2 Select Main Repeater | X | X |
| Caseta Smart Bridge Pro | X | X |
| Caseta Smart Bridge | | X |
If your system supports only one protocol, then the choice of bridge is easy. If you have a system that supports both protocols, you must decide which you wish to use.
You should be aware of the following functional differences between the protocols:
Using LIP on Caseta you can’t receive notifications of occupancy group status changes (occupied/unoccupied/unknown), but using LEAP you can.
Conversely, LIP provides notifications of keypad key presses, while LEAP does not (as far as is currently known). This means that using ipbridge you can trigger rules and take actions on keypad key presses/releases, but using leapbridge you can’t.
Caseta and RA2 Select device discovery is supported via LEAP, but not via LIP.
The leapbridge is a bit more complicated to configure because LEAP uses an SSL connection and authenticates using certificates.
LIP is a publicly documented protocol, while LEAP is not. This means that Lutron could make a change that breaks LEAP support at any time.
It is possible to run leapbridge and ipbridge at the same time, for the same bridge device, but each managed device (e.g. keypad or dimmer) should only be configured through one bridge. Remember that LEAP device IDs and LIP integration IDs are not necessarily equal!
# ipbridge
This is the standard bridge which should be used with most Lutron systems. It relies on the Lutron Integration Protocol (LIP) over TCP/IP to communicate with the target device. It can currently be used with a RadioRA 2 main repeater, a HomeWorks QS processor, a Caseta Smart Bridge Pro, or an RA2 Select main repeater.
The ipbridge configuration requires the IP address of the bridge as well as the telnet username and password to log in to the bridge. It is recommended that main repeaters/processors be configured with static IP addresses. However if automatic discovery is used, the bridge thing will work with DHCP-configured addresses.
The optional advanced parameter heartbeat can be used to set the interval between connection keepalive heartbeat messages, in minutes. It defaults to 5. Note that the handler will wait up to 30 seconds for a heartbeat response before attempting to reconnect. The optional advanced parameter reconnect can be used to set the connection retry interval, in minutes. It also defaults to 5.
The optional advanced parameter delay can be used to set a delay (in milliseconds) between transmission of integration commands to the bridge device. This may be used for command send rate throttling. It can be set to an integer value between 0 and 250 ms, and defaults to 0 (no delay). It is recommended that this parameter be left at the default unless you experience problems with sent commands being dropped/ignored. This has been reported in some rare cases when large numbers of commands were sent in short periods to Caseta hubs. If you experience this problem, try setting a delay value of around 100 ms as a starting point.
The optional advanced parameter discoveryFile can be set to force the device discovery service to read the Lutron configuration XML from a local file rather than retrieving it via HTTP from the RadioRA 2 or HomeWorks QS bridge device. This is useful in the case of some older Lutron software versions, where the discovery service may have problems retrieving the file from the bridge device. Note that the user which openHAB runs under must have permission to read the file.
Thing configuration file example:
Bridge lutron:ipbridge:radiora2 [ ipAddress="192.168.1.2", user="lutron", password="integration" ] {
Thing ...
Thing ...
}
# leapbridge [experimental]
The leapbridge is an experimental bridge which allows the binding to work with the Caseta Smart Hub (non-Pro version). It can also be used to provide additional features, such as support for occupancy groups and device discovery, when used with Caseta Smart Hub Pro or RA2 Select. It uses the LEAP protocol over SSL, which is an undocumented protocol supported by some of Lutron's newer systems. Note that the LEAP protocol will not notify the bridge of keypad key presses. If you need this useful feature, you should use ipbridge instead. You can use both ipbridge and leapbridge at the same time, but each device should only be configured through one bridge. You should also be aware that LEAP and LIP integration IDs for the same device can be different.
For instructions on configuring authentication for leapbridge, see the Leap Notes document.
The ipAddress, keystore, and keystorePassword parameters must be set. The optional port parameter defaults to 8081 and should not normally need to be changed.
The optional parameter certValidate defaults to true. It should be set to false only if validation of the hub's server certificate is failing, possibly because the hostname you are using for it does not match its internal hostname. If this happens, the leapbridge status will be: "OFFLINE - COMMUNICATION_ERROR - Error opening SSL connection", and a message like the following may be logged: Error opening SSL connection: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.
The optional advanced parameter heartbeat can be used to set the interval between connection keepalive heartbeat messages, in minutes. It defaults to 5. Note that the handler will wait up to 30 seconds for a heartbeat response before attempting to reconnect. The optional advanced parameter reconnect can be used to set the connection retry interval, in minutes. It also defaults to 5. The optional advanced parameter delay can be used to set a delay (in milliseconds) between transmission of LEAP commands to the bridge device. It should not normally need to be changed.
Thing configuration file example:
Bridge lutron:leapbridge:caseta [ ipAddress="192.168.1.3", keystore="/home/openhab/lutron.keystore", keystorePassword="secret" ] {
Thing ...
Thing ...
}
# Devices
# Dimmers
Dimmers can optionally be configured to specify a default fade in and fade out time in seconds using the fadeInTime and fadeOutTime parameters. These are used for ON and OFF commands, respectively, and default to 1 second if not set. Commands using a specific percent value will use a default fade time of 0.25 seconds.
Dimmers also support the optional advanced parameters onLevel and onToLast. The onLevel parameter specifies the level to which the dimmer will go when sent an ON command. It defaults to 100. The onToLast parameter is a boolean that defaults to false. If set to "true", the dimmer will go to its last non-zero level when sent an ON command. If the last non-zero level cannot be determined, the value of onLevel will be used instead.
A dimmer thing has a single channel lightlevel with type Dimmer and category DimmableLight. The dimmer thing was previously also used to control fan speed controllers, but now you should use the fan thing instead.
Thing configuration file example:
Thing dimmer livingroom [ integrationId=8, fadeInTime=0.5, fadeOutTime=5 ]
The dimmer thing supports the thing action setLevel(Double level, Double fadeTime, Double delayTime) for automation rules.
The parameters are:
level - The new light level to set (0-100)
fadeTime - The time in seconds over which the dimmer should fade to the new level
delayTime - The time in seconds to delay before starting to fade to the new level
The fadeTime and delayTime parameters are significant to 2 digits after the decimal point (i.e. to hundredths of a second), but some Lutron systems may round the time to the nearest 0.25 seconds when processing the command. Times of 100 seconds or more will be rounded to the nearest integer value.
See below for an example rule using thing actions.
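As a sketch, a rule invoking this action from the openHAB rules DSL might look like the following (the thing UID lutron:dimmer:radiora2:livingroom and the triggering item are invented for illustration):

```
rule "Fade living room lights"
when
    Item Goodnight_Switch received command ON
then
    // Look up the thing actions for a hypothetical dimmer thing
    val dimmerActions = getActions("lutron", "lutron:dimmer:radiora2:livingroom")
    // Fade to 10% over 30 seconds, starting immediately
    dimmerActions.setLevel(10.0, 30.0, 0.0)
end
```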
# Switches
Switches take no additional parameters besides integrationId. A switch thing has a single channel switchstatus with type Switch and category Switch.
Thing configuration file example:
Thing switch porch [ integrationId=8 ]
# Fans
Fan speed controllers are interfaced with using the fan thing. It accepts no additional parameters besides integrationId. A fan thing has two channels, fanspeed and fanlevel.
Thing configuration file example:
Thing fan porchfan [ integrationId=12 ]
# Occupancy Sensors
An occupancysensor thing interfaces to Lutron Radio Powr Savr wireless occupancy/vacancy sensors on RadioRA 2 and HomeWorks QS systems. On these systems, you should generally choose to interface to either an occupancy group or individual occupancy sensors for a given area. For Caseta Smart Motion Sensors, you must use the ogroup thing instead.
It accepts no configuration parameters other than integrationId.
The binding creates one occupancystatus channel, Item type Switch, category Motion. It is read-only, and ignores all commands. The channel state can be monitored for occupied (ON) or unoccupied (OFF) events coming from the sensor. The sensors cannot be queried for their state, so initial channel state at startup will be undefined (NULL).
Thing configuration file example:
Thing occupancysensor shopsensor [ integrationId=7 ]
# Occupancy Groups
An ogroup thing interfaces to an occupancy group, which shows occupancy/vacancy status for an area or room with one or more occupancy sensors. On RadioRA 2 and HomeWorks QS systems, you should generally choose to interface to either an occupancy group or individual occupancy sensors for a given area. On Caseta systems, you cannot interface to individual sensors and must use the ogroup thing. The integrationId parameter must be set to the occupancy group ID.
The binding creates one read-only groupstate channel, item type String, category Motion. The value can be "OCCUPIED", "UNOCCUPIED", or "UNKNOWN".
Thing configuration file example:
Thing ogroup lrgroup [ integrationId=7 ]
# seeTouch and Hybrid seeTouch Keypads
seeTouch and Hybrid seeTouch keypads are interfaced with using the keypad thing. In addition to the usual integrationId parameter, it accepts model and autorelease parameters. The model parameter should be set to the Lutron keypad model number. This will cause the handler to create only the appropriate channels for that particular keypad model. The default is "Generic", which will cause the handler to create all possible channels, some of which will likely not be appropriate for your model.
The autorelease parameter is a boolean. Setting it to true will cause each button channel state to transition back to OFF (released) automatically after going to ON when a button is pressed. Normally, a Lutron keypad will send a "pressed" event when a button is pressed, and a "released" event when it is released. The handler will set the button channel state to ON when it receives the "pressed" event, and to OFF when it receives the "released" event. This allows you to take actions on both state changes. However, some integration applications such as Lutron Home+ only cause a "pressed" event to be generated when remotely "pressing" a button. A "release" is never sent, so the button channel would become "stuck" in the ON state. To prevent this, the autorelease parameter defaults to true. If you do not use integration applications that exhibit this sort of anti-social behavior and you wish to trigger rules on both button press and release, you should set autorelease to false.
The autorelease parameter also affects behavior when sending an ON command to a button channel to trigger a remote button press. If autorelease is set, the handler will send action "release" to the device component immediately after sending action "press". When the controller responds, the channel state will be transitioned back to OFF.
A channel button[nn] with item type Switch and category Switch is created for each button, and a channel led[nn] with item type Switch and category Light is created for each button indicator LED. You can monitor button channels for ON and OFF state changes to indicate button presses and releases, and send ON and OFF commands to remotely press and release buttons. Ditto for the indicator LED channels. Note, however, that version 11.6 or higher of the RadioRA 2 software may be required in order to drive keypad LED states, and then this may only be done on unbound buttons.
Component numbering: For button and LED layouts and numbering, see the Lutron Integration Protocol Guide (rev. AA) p.104 (https://www.lutron.com/TechnicalDocumentLibrary/040249.pdf (opens new window)). If you are having problems determining which channels have been created for a given keypad model, select the appropriate keypad thing under Settings/Things in the Administration UI and click on the Channels tab. You can also run the command things show <thingUID> (e.g. things show lutron:keypad:radiora2:entrykeypad) from the openHAB CLI to list the channels.
Supported settings for model parameter: H1RLD, H2RLD, H3BSRL, H3S, H4S, H5BRL, H6BRL, HN1RLD, HN2RLD, HN3S, HN3BSRL, HN4S, HN5BRL, HN6BRL, W1RLD, W2RLD, W3BD, W3BRL, W3BSRL, W3S, W4S, W5BRL, W5BRLIR, W6BRL, W7B, Generic (default)
Thing configuration file example:
Thing keypad entrykeypad [ integrationId=10, model="W7B", autorelease=true ]
Example rule triggered by a keypad button press:
rule ExampleScene
when
Item entrykeypad_button4 received update ON
then
Library1_Brightness.sendCommand(OFF)
end
# Tabletop seeTouch Keypads
Tabletop seeTouch keypads use the ttkeypad thing. It accepts the same integrationId, model, and autorelease parameters and creates the same channel types as the keypad thing. See the keypad section above for a full discussion of configuration and use.
Component numbering: For button and LED layouts and numbering, see the Lutron Integration Protocol Guide (rev. AA) p.110 (https://www.lutron.com/TechnicalDocumentLibrary/040249.pdf (opens new window)). If you are having problems determining which channels have been created for a given keypad model, select the appropriate ttkeypad thing under Settings/Things in the Administration UI and click on the Channels tab. You can also run the command things show <thingUID> (e.g. things show lutron:ttkeypad:radiora2:bedroomkeypad) from the openHAB CLI to list the channels.
Supported settings for model parameter: T5RL, T10RL, T15RL, T5CRL, T10CRL, T15CRL, Generic (default)
Thing configuration file example:
Thing ttkeypad bedroomkeypad [ integrationId=11, model="T10RL", autorelease=true ]
# International seeTouch Keypads (HomeWorks QS)
International seeTouch keypads used in the HomeWorks QS system use the intlkeypad thing. It accepts the same integrationId, model, and autorelease parameters and creates the same button and LED channel types as the keypad thing. See the keypad section above for a full discussion of configuration and use.
To support this keypad's contact closure inputs, CCI channels named cci1 and cci2 are created with item type Contact and category Switch. They are marked as Advanced, so you will need to check "Show advanced" in order to see them listed in the Administration UI. They present ON/OFF states the same as a keypad button.
Component numbering: For button and LED layouts and numbering, see the Lutron Integration Protocol Guide (rev. AA) p.107 (https://www.lutron.com/TechnicalDocumentLibrary/040249.pdf (opens new window)). If you are having problems determining which channels have been created for a given keypad model, select the appropriate intlkeypad thing under Settings/Things in the Administration UI and click on the Channels tab. You can also run the command things show <thingUID> (e.g. things show lutron:intlkeypad:hwprocessor:kitchenkeypad) from the openHAB CLI to list the channels.
Supported settings for model parameter: 2B, 3B, 4B, 5BRL, 6BRL, 7BRL, 8BRL, 10BRL, Generic (default)
Thing configuration file example:
Thing intlkeypad kitchenkeypad [ integrationId=15, model="10BRL", autorelease=true ]
# Palladiom Keypads (HomeWorks QS)
Palladiom keypads used in the HomeWorks QS system use the palladiomkeypad thing. It accepts the same integrationId, model, and autorelease parameters and creates the same button and LED channel types as the keypad thing. See the keypad section above for a full discussion of configuration and use.
Component numbering: For button and LED layouts and numbering, see the Lutron Integration Protocol Guide (rev. AA) p.95 (https://www.lutron.com/TechnicalDocumentLibrary/040249.pdf (opens new window)). If you are having problems determining which channels have been created for a given keypad model, select the appropriate palladiomkeypad thing under Settings/Things in the Administration UI and click on the Channels tab. You can also run the command things show <thingUID> (e.g. things show lutron:palladiomkeypad:hwprocessor:kitchenkeypad) from the openHAB CLI to list the channels.
Supported settings for model parameter: 2W, 3W, 4W, RW, 22W, 24W, 42W, 44W, 2RW, 4RW, RRW
Thing configuration file example:
Thing palladiomkeypad kitchenkeypad [ integrationId=16, model="4W", autorelease=true ]
# Pico Keypads
Pico keypads use the pico thing. It accepts the same integrationId, model, and autorelease parameters and creates the same channel types as the keypad and ttkeypad things. The only difference is that no LED channels will be created, since Pico keypads have no indicator LEDs. See the keypad section above for a full discussion of configuration and use.
Component numbering: For button layouts and numbering, see the Lutron Integration Protocol Guide (rev. AA) p.113 (https://www.lutron.com/TechnicalDocumentLibrary/040249.pdf (opens new window)). If you are having problems determining which channels have been created for a given keypad model, select the appropriate pico thing under Settings/Things in the Administration UI and click on the Channels tab. You can also run the command things show <thingUID> (e.g. things show lutron:pico:radiora2:hallpico) from the openHAB CLI to list the channels.
Supported settings for model parameter: 2B, 2BRL, 3B, 3BRL, 4B, Generic (default)
Thing configuration file example:
Thing pico hallpico [ integrationId=12, model="3BRL", autorelease=true ]
# GRAFIK Eye QS Keypads (in RadioRA 2/HomeWorks QS systems)
GRAFIK Eye devices can contain up to 6 lighting dimmers, a scene controller, a time clock, and a front panel with a column of 5 programmable scene buttons and 0 to 3 columns of programmable shade or lighting control buttons. They can be used as peripheral devices in a RadioRA 2 or HomeWorks QS system, or can be used as stand-alone controllers that themselves can control other Lutron devices. The grafikeyekeypad thing is used to interface to the GRAFIK Eye QS front panel keypad when it is used in a RadioRA 2 or HomeWorks QS system. In this configuration, the integrated dimmers will appear to openHAB as separate output devices.
If your GRAFIK Eye is being used as a stand-alone device and is not integrated into a RadioRA 2 or HomeWorks QS system, then this is not the thing you are looking for. You should instead be using the grafikeye thing (see below).
The grafikeyekeypad thing accepts the same integrationId, model, and autorelease parameters and creates the same button, LED, and CCI channel types as the other keypad things (see above). The model parameter should be set to indicate whether there are zero, one, two, or three columns of buttons on the left side of the panel. Note that this count does not include the column of 5 scene buttons always found on the right side of the panel.
To support the GRAFIK Eye's contact closure input, a CCI channel named cci1 will be created with item type Contact and category Switch. It is marked as Advanced, so you will need to check "Show advanced" in order to see it listed in the Administration UI. It presents ON/OFF states the same as a keypad button.
Component numbering: The buttons and LEDs on the GRAFIK Eye are numbered top to bottom, starting with the 5 scene buttons in a column on the right side of the panel, and then proceeding with the columns of buttons (if any) on the left side of the panel, working left to right. If you are having problems determining which channels have been created for a given model setting, select the appropriate grafikeyekeypad thing under Settings/Things in the Administration UI and click on the Channels tab. You can also run the command things show <thingUID> (e.g. things show lutron:grafikeyekeypad:radiora2:theaterkeypad) from the openHAB CLI to list the channels.
Supported settings for model parameter: 0COL, 1COL, 2COL, 3COL (default)
Thing configuration file example:
Thing lutron:grafikeyekeypad:theaterkeypad (lutron:ipbridge:radiora2) [ integrationId=12, model="3COL", autorelease=true ]
# Virtual Keypads
The virtualkeypad thing is used to interface to the virtual buttons on the RadioRA 2 main repeater or HomeWorks processor. These are sometimes referred to in the Lutron documentation as phantom buttons or integration buttons, and are used only for integration. There are 100 of these virtual buttons, and 100 corresponding virtual indicator LEDs.
The virtualkeypad thing can also be used to interface to the Smart Bridge scene buttons on Caseta systems. This allows you to trigger your defined scenes via the virtual keypad buttons. For this to work, the optional model parameter must be set to Caseta. When used with Caseta, no virtual indicator LED channels are created.
The behavior of this binding is the same as the other keypad bindings, with the exception that the button and LED channels created have the Advanced flag set. This means, among other things, that you will need to check "Show advanced" in order to see them listed in the Administration UI.
In most cases the integrationId parameter should be set to 1.
Supported settings for model parameter: Caseta, Other (default)
Thing configuration file example:
Thing virtualkeypad repeaterbuttons [ integrationId=1, autorelease=true ]
# VCRX Modules
The Lutron VCRX appears to openHAB as multiple devices. The 6 buttons (which can be activated remotely by HomeLink remote controls), 6 corresponding LEDs, and 4 contact closure inputs (CCIs) are handled by the vcrx thing, which behaves like a keypad. The contact closure outputs (CCOs) have their own integration IDs and are handled by the cco thing (see below).
Supported options are integrationId and autorelease. Supplying a model is not required, as there is only one model.
To support the contact closure inputs, CCI channels named cci[n] are created with item type Contact and category Switch. The VCRX security (Full/Flash) input controls both the cci1 and cci2 channels, while input connections 1 and 2 map to the cci3 and cci4 channels respectively. The cci channels are marked as Advanced, so you will need to check "Show advanced" in order to see them listed in the Administration UI. They present OPEN/CLOSED states but do not accept commands, since Contact items are read-only in openHAB. Note that the autorelease option does not apply to CCI channels.
Thing configuration file example:
Thing vcrx vcrx1 [ integrationId=13, autorelease=true ]
# QS IO Interface (HomeWorks QS)
The Lutron QS IO Interface (QSE-IO) appears to openHAB as multiple devices. The 5 contact closure inputs (CCIs) are handled by the qsio thing. The 5 contact closure outputs (CCOs) are handled by the cco thing (see below). The only configuration option is integrationId.
To support the contact closure inputs, CCI channels named cci[n] are created with item type Contact and category Switch. They are marked as Advanced, so you will need to check "Show advanced" in order to see them listed in the Administration UI. They present OPEN/CLOSED states but do not accept commands, as Contact items are read-only in openHAB.
Some functionality may depend on QSE-IO DIP switch settings. See the Lutron documentation for more information.
Thing configuration file example:
Thing qsio sensorinputs [ integrationId=42 ]
# QS Wallbox Closure Interface (WCI) (HomeWorks QS only)
The Lutron Wallbox Closure Interface (QSE-CI-WCI) is used to interface to contact closure keypads. It is handled by the wci thing. The 8 button inputs appear to the HomeWorks system as normal keypad buttons. There are also 8 LEDs, although they are normally hidden and thus mainly useful for setup and diagnostics.
Supported options are integrationId and autorelease. Supplying a model is not required, as there is only one model.
See the Lutron documentation for more information.
Thing configuration file example:
Thing wci specialkeypad [ integrationId=48, autorelease=true ]
# CCO Modules
Contact closure output (cco) things accept outputType and pulseLength parameters. The outputType parameter is a string that should be set to "Pulsed" for pulsed CCOs or "Maintained" for non-pulsed CCOs. The default is "Pulsed", since that is generally the safer choice if the setting turns out to be wrong. The pulseLength parameter sets the pulse length in seconds for a pulsed output. It can range from 0.25 to 99.0 seconds and defaults to 0.5. It is ignored if outputType="Maintained". Be aware that the Lutron controller may round the pulse length down to the nearest 0.25 seconds.
Note: The ccopulsed and ccomaintained things are now deprecated. You should use the cco thing with the appropriate outputType setting instead.
Each cco thing creates one switch channel called switchstatus. For pulsed CCOs, sending an ON command will close the output for the configured pulse time. Sending an OFF command does nothing. Because of limitations in RadioRA 2, you cannot monitor the state of a pulsed CCO. Therefore, the channel state will only transition OFF->ON->OFF when you send an ON command.
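The clamping and rounding described above can be sketched in a few lines (this is an illustration of the documented behavior, not code from the binding; the function name is hypothetical):

```python
def effective_pulse_length(requested):
    """Clamp a requested pulseLength to the supported 0.25-99.0 s range,
    then round down to the nearest 0.25 s step, as the Lutron controller may do."""
    clamped = min(max(requested, 0.25), 99.0)
    # Round down to the nearest multiple of 0.25 s
    return int(clamped / 0.25) * 0.25

# For example, a requested 0.6 s pulse would be delivered as a 0.5 s pulse.
```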
For maintained CCOs, sending ON and OFF commands works as expected, and the channel state updates as expected when either openHAB commands or external events change the CCO device state.
Thing configuration file example:
Thing cco garage [ integrationId=5, outputType="Pulsed", pulseLength=0.5 ]
Thing cco relay1 [ integrationId=7, outputType="Maintained" ]
# Shades
Each Lutron shade, motorized drape, or QS motor controller output (LQSE-4M-D) is controlled by a shade thing. The only configuration parameter it accepts is integrationId.
A single channel shadelevel with item type Rollershutter and category Rollershutter will be created for each shade thing. It accepts Percent, Up, Down, Stop and Refresh commands. Sending a Percent command will cause the shade to immediately move so as to be open the specified percentage. You can also read the current shade level from the channel. It is specified as a percentage, where 0% = closed and 100% = fully open. Movement delays are not currently supported. The shade handler should be compatible with all Lutron devices which appear to the system as shades, including roller shades, honeycomb shades, pleated shades, roman shades, tension roller shades, drapes, and Kirbe vertical drapes.
Motor controller outputs on a LQSE-4M-D (HomeWorks QS only) behave similarly to a shade. The only difference is that percentages other than 0% and 100% will be ignored, since arbitrary positioning is not supported by the hardware. The value of shadelevel for a motor will likewise always be either 0% or 100%, depending on whether the last command sent was Up or Down.
Note: While a shade is moving to a specific level because of a Percent command, the system will report the target level for the shade rather than the actual current level. While a shade is moving because of an Up or Down command, it will report the previous level until it stops moving.
Thing configuration file example:
Thing shade libraryshade [ integrationId=33 ]
# Blinds [Experimental]
Each Lutron Sivoia QS Venetian Blind or Horizontal Sheer Blind is controlled by a blind thing. Besides integrationId, it requires that the parameter type be set to either "Venetian" for venetian blinds or "Sheer" for horizontal sheer blinds. There is no default. If discovery is used, the type parameter will be set automatically when the blind thing is created.
Two channels, blindliftlevel and blindtiltlevel, with item type Rollershutter and category Rollershutter will be created for each blind thing. They control the up/down motion and the slat tilt motion of the blinds, respectively. Each channel accepts Percent, Up, Down, Stop and Refresh commands. Sending a Percent command will cause the blind to immediately move so as to be open the specified percentage. You can also read the current setting from each channel. It is specified as a percentage, where 0% = closed and 100% = fully open. Movement delays are not currently supported.
Note: While a blind is moving to a specific level because of a Percent command, the Lutron system will report the target position for the blind rather than the actual current position. While a blind is moving because of an Up or Down command, it will report the previous level until it stops moving.
Note: Support for Sivoia QS blinds is new and has been through very limited testing. Please comment on your use of it in the openHAB community forum.
Thing configuration file example:
Thing blind officeblinds [ integrationId=76, type="Venetian" ]
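Item definitions for the two blind channels might look like this (a sketch: the item names are hypothetical, and a bridge named radiora2 is assumed as in the configuration file examples later in this document):

```
Rollershutter Office_BlindLift "Office Blind Lift" { channel="lutron:blind:radiora2:officeblinds:blindliftlevel" }
Rollershutter Office_BlindTilt "Office Blind Tilt" { channel="lutron:blind:radiora2:officeblinds:blindtiltlevel" }
```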
# Green Mode
Radio RA2 and HomeWorks QS systems have a "Green Mode" or "Green Button" feature which allows the system to be placed in to one or more user-defined power saving modes called "steps". Each step can take actions such as trimming down the 100% level on selected lighting dimmers by a specified percentage, shutting off certain loads, modifying thermostat settings, etc. Typically step 1 is "Off" or "Normal", and step 2 is "Green Mode", however other steps may be defined by the installer as desired.
The greenmode thing is used to interface to the green mode subsystem. It requires that the integrationId parameter be set to the ID of the green mode subsystem. This should generally be 22. It creates a single channel step that can be used to set or query the active green mode step number.
Unlike other Lutron system state settings, the binding is not automatically notified by the bridge device of changes to the current green mode step. This may be due to a bug in the Lutron firmware. The handler can be set to poll for the active green mode step so that the binding will know if it has been changed by another station. The optional pollInterval configuration parameter controls how often the handler polls. It can be set to anywhere between 0 and 240 minutes, and defaults to 15 minutes. A setting of 0 will disable polling. You can also initiate a poll at any time by sending a refresh command (RefreshType.REFRESH) to the step channel. Note that it should usually be unnecessary to set the poll interval to less than 5-10 minutes, since the green mode step typically changes rather infrequently and takes effect gradually.
Thing configuration file example:
Thing greenmode greenmode [ integrationId=22 ]
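Because the step channel relies on polling, a rule can force an immediate poll by sending a refresh command. A sketch (the Greenmode_Step item matches the example items later in this document; the thing UID here is hypothetical):

```
import org.openhab.core.types.RefreshType

rule "Lutron green mode refresh"
when
    Thing "lutron:greenmode:radiora2:greenmode" changed to ONLINE
then
    Greenmode_Step.sendCommand(RefreshType.REFRESH)
end
```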
# Timeclock
RadioRA 2 and Homeworks QS have timeclock subsystems that provide scheduled execution of tasks at set times, randomized times or at arbitrary offsets from local sunrise/sunset. The tasks executed depend on the currently selected timeclock mode (e.g. Normal, Away, Suspend) and the modes themselves are user-definable (RadioRA 2 only). In addition, tasks can be individually executed, and enabled or disabled for scheduled execution.
The timeclock thing provides an interface to timeclock functions. It allows you to get and set the current timeclock mode, get the current day's sunrise and sunset times, execute a specific task, be notified when a task executes, and enable or disable tasks. The integrationId parameter must be set to the ID of the timeclock subsystem.
It creates the following six channels:
clockmode - Gets or sets the current timeclock mode.
sunrise - The timeclock's local sunrise time for the current day. Read only. You must send a refresh command (RefreshType.REFRESH) to query the system for the current day's sunrise time, as it is not automatically updated.
sunset - The timeclock's local sunset time for the current day. Read only. You must send a refresh command to query the system for the current day's sunset time, as it is not automatically updated.
execevent - Updates with the index number of each executing event. Send an event's index number to start execution of it.
enableevent - Updates with an event's index number when it is enabled. Send an event's index number to enable it.
disableevent - Updates with an event's index number when it is disabled. Send an event's index number to disable it.
All channels except clockmode are marked as advanced.
Thing configuration file example:
Thing timeclock timeclock [ integrationId=23 ]
Example rule to refresh sunrise/sunset channels daily and at restart:
import org.openhab.core.types.RefreshType
rule "Lutron sunrise/sunset daily refresh"
when
// Trigger at time 00:05:00 every day
Time cron "0 5 0 * * ?" or
Thing "lutron:timeclock:70acb5a7:23" changed to ONLINE
then
Timeclock_Sunrise.sendCommand(RefreshType.REFRESH)
Timeclock_Sunset.sendCommand(RefreshType.REFRESH)
end
# System State Variables (HomeWorks QS only) [Experimental]
HomeWorks QS systems allow for conditional programming logic based on state variables. The sysvar thing allows state variable values to be read and set from openHAB. This makes sophisticated integration schemes possible. Each sysvar thing represents one system state variable. It has a single channel varstate with type Number and category Number. Automatic discovery of state variables is not yet supported. They must be manually configured.
Thing configuration file example:
Thing sysvar qsstate [ integrationId=80 ]
# Channels
The following is a summary of channels for all RadioRA 2 binding things:
Thing Channel Item Type Description
dimmer lightlevel Dimmer Increase/decrease the light level
switch switchstatus Switch On/off status of the switch
fan fanspeed String Set/get fan speed using string options
fan fanlevel Dimmer Set/get fan speed using a dimmer channel
occupancysensor occupancystatus Switch Occupancy sensor status
ogroup groupstate String Occupancy group status
cco switchstatus Switch On/off status of the CCO
keypads (all) button* Switch Keypad button
keypads (except pico) led* Switch LED indicator for the associated button
vcrx cci* Contact Contact closure input on/off status
shade shadelevel Rollershutter Level of the shade (100% = full open)
blind blindliftlevel Rollershutter Level of the blind (100% = full open)
blind blindtiltlevel Rollershutter Tilt of the blind slats
greenmode step Number Get/set active green mode step number
timeclock clockmode Number Get/set active clock mode index number
timeclock sunrise DateTime Get the timeclock's sunrise time
timeclock sunset DateTime Get the timeclock's sunset time
timeclock execevent Number Execute event or monitor events executed
timeclock enableevent Number Enable event or monitor events enabled
timeclock disableevent Number Disable event or monitor events disabled
sysvar varstate Number Get/set system state variable value
The channels available on each keypad device (i.e. keypad, ttkeypad, intlkeypad, grafikeyekeypad, pico, vcrx, and virtualkeypad) will vary with keypad type and model. Appropriate channels will be created automatically by the keypad, ttkeypad, intlkeypad, grafikeyekeypad, and pico thing handlers based on the setting of the model parameter for those thing types.
# Commands supported by channels
Thing Channel Native Type Accepts
dimmer lightlevel PercentType OnOffType, PercentType (rounded/truncated to integer)
switch switchstatus OnOffType OnOffType
fan fanspeed StringType "OFF","LOW","MEDIUM","MEDIUMHIGH","HIGH"
fan fanlevel PercentType OnOffType, PercentType
occ. sensor occupancystatus OnOffType (readonly)
ogroup groupstate StringType "OCCUPIED","UNOCCUPIED","UNKNOWN" (readonly)
cco switchstatus OnOffType OnOffType, RefreshType
keypads button* OnOffType OnOffType
led* OnOffType OnOffType, RefreshType
cci* OpenClosedType (readonly)
shade shadelevel PercentType PercentType, UpDownType, StopMoveType.STOP, RefreshType
blind blindliftlevel PercentType PercentType, UpDownType, StopMoveType.STOP, RefreshType
blindtiltlevel PercentType PercentType, UpDownType, StopMoveType.STOP, RefreshType
greenmode step DecimalType DecimalType, OnOffType (ON=2,OFF=1), RefreshType
timeclock clockmode DecimalType DecimalType, RefreshType
sunrise DateTimeType RefreshType (readonly)
sunset DateTimeType RefreshType (readonly)
execevent DecimalType DecimalType
enableevent DecimalType DecimalType
disableevent DecimalType DecimalType
sysvar varstate DecimalType DecimalType (rounded/truncated to integer)
Most channels receive immediate notifications of device state changes from the Lutron control system. The only exceptions are greenmode step, which is periodically polled and accepts REFRESH commands to initiate immediate polling, and timeclock sunrise and sunset, which must be polled daily using REFRESH commands to retrieve current values. Many other channels accept REFRESH commands to initiate a poll, but sending one should not normally be necessary.
# RadioRA 2/HomeWorks QS Configuration File Examples:
demo.things:
Bridge lutron:ipbridge:radiora2 [ ipAddress="192.168.1.123", user="lutron", password="integration" ] {
Thing dimmer lrtable "Table Lamp" @ "Living Room" [ integrationId=45, fadeInTime=0.5, fadeOutTime=5 ]
Thing dimmer lrtorch "Torch Lamp" @ "Living Room" [ integrationId=44, fadeInTime=0.5, fadeOutTime=5 ]
Thing dimmer lrspot [ integrationId=38, fadeInTime=0.5, fadeOutTime=5 ]
Thing switch path [ integrationId=61 ]
Thing keypad entrykeypad [ integrationId=64, model="W7B", autorelease=true ]
Thing ttkeypad bedroomkeypad [ integrationId=28, model="T15RL", autorelease=true ]
Thing pico librarypico [ integrationId=71, model="3BRL", autorelease=true ]
Thing vcrx vcrx1 [ integrationId=34, autorelease=true ]
Thing cco garage1 [ integrationId=75, outputType="Pulsed", pulseLength=0.5 ]
Thing shade libraryshade1 [ integrationId=66 ]
Thing greenmode greenmode [ integrationId=22 ]
Thing timeclock timeclock [ integrationId=23 ]
Thing occupancysensor laundryocc [ integrationId=62 ]
}
demo.items:
Dimmer LivingRm_TableLamp "Table Lamp" { channel="lutron:dimmer:radiora2:lrtable:lightlevel" }
Switch FrontYard_PathLight "Path Light" { channel="lutron:switch:radiora2:path:switchstatus" }
Switch LaundryRm_Sensor "Occ Sensor" { channel="lutron:occupancysensor:radiora2:laundryocc:occupancystatus" }
Switch Entryway_Keypad_B1 "Keypad Button 1" { channel="lutron:keypad:radiora2:entrykeypad:button1" }
Switch Entryway_Keypad_L1 "Keypad LED 1" { channel="lutron:keypad:radiora2:entrykeypad:led1" }
Contact Vcrx1_CCI1 "Input 1" { channel="lutron:vcrx:radiora2:vcrx1:cci1" }
Switch Garage_CCO1 "Garage Door" { channel="lutron:cco:radiora2:garage1:switchstatus" }
DateTime Timeclock_Sunrise "Sunrise" { channel="lutron:timeclock:radiora2:timeclock:sunrise" }
DateTime Timeclock_Sunset "Sunset" { channel="lutron:timeclock:radiora2:timeclock:sunset" }
Number Timeclock_Clockmode "Clock Mode" { channel="lutron:timeclock:radiora2:timeclock:clockmode" }
Number Greenmode_Step "Green Step" { channel="lutron:greenmode:radiora2:greenmode:step" }
Rollershutter Lib_Shade1 "Shade 1" { channel="lutron:shade:radiora2:libraryshade1:shadelevel" }
dimmerAction.rules:
rule "Test dimmer action"
when
Item TestSwitch received command ON
then
val actions = getActions("lutron","lutron:dimmer:radiora2:lrtable")
actions.setLevel(100, 5.5, 0)
end
# Lutron RadioRA (Classic) Binding
This binding integrates with the legacy Lutron RadioRA (Classic) lighting system.
This binding depends on RS232 communication. It has only been tested using the Chronos time module but the RS232 module should work as well.
# Supported Things
This binding currently supports the following thing types:
Thing Type ID Description
ra-rs232 Bridge RadioRA device that supports RS232 communication
ra-dimmer Thing Dimmer control
ra-switch Thing Switch control
ra-phantomButton Thing Phantom Button to control multiple controls (Scenes)
# Thing Configurations
Thing Config Description
ra-rs232 portName The serial port to use to communicate with Chronos or RS232 module
baud (Optional) Baud Rate (defaults to 9600)
ra-dimmer zoneNumber Assigned Zone Number within the Lutron RadioRA system
fadeOutSec (Optional) Time in seconds dimmer should take when lowering the level
fadeInSec (Optional) Time in seconds dimmer should take when raising the level
ra-switch zoneNumber Assigned Zone Number within the Lutron RadioRA system
ra-phantomButton buttonNumber Phantom Button Number within the Lutron RadioRA system
# Channels
The following channels are supported:
Thing Type Channel ID Item Type Description
ra-dimmer lightlevel Dimmer Increase/Decrease dimmer intensity
ra-switch/ra-phantomButton switchstatus Switch On/Off state of switch
# Example
lutronradiora.things
Bridge lutronradiora:ra-rs232:chronos1 [portName="/dev/ttys002"] {
Thing ra-dimmer dimmer1 [ zoneNumber=1 ]
Thing ra-dimmer dimmer2 [ zoneNumber=2 ]
Thing ra-switch switch1 [ zoneNumber=3 ]
Thing ra-switch switch2 [ zoneNumber=4 ]
Thing ra-phantomButton phantomButton1 [ buttonNumber=1 ]
}
lutronradiora.items
Dimmer Dimmer_Kitchen "Kitchen Lights" { channel="lutronradiora:dimmer:chronos1:dimmer1:lightlevel" }
Dimmer Dimmer_FamilyRoom "Family Room Lights" { channel="lutronradiora:dimmer:chronos1:dimmer2:lightlevel" }
Switch Switch_Patio "Patio Light" { channel="lutronradiora:switch:chronos1:switch1:switchstatus" }
Switch Switch_FrontDoor "Front Door Lights" { channel="lutronradiora:switch:chronos1:switch2:switchstatus" }
Switch Phantom_Movie "Movie Scene" { channel="lutronradiora:phantomButton:chronos1:phantomButton1:switchstatus" }
# Legacy HomeWorks RS232 (Serial) Processors
The binding supports legacy HomeWorks processors that interface with a Serial RS232 connection. To connect to such a system, you would need to use a RS232 -> USB adapter (assuming you don't have a serial port).
Please see HomeWorks RS232 Protocol Guide (opens new window) for information on the protocol.
# Supported Things
HomeWorks RS232-connected Processor Units
Dimmers
Supported in future updates:
Keypads
Keypad LEDs
# Discovery
This binding supports active and passive discovery. It will detect dimmers as they are manually raised or lowered, or can be made to scan for configured dimmer modules.
# Thing Configuration
The bridge requires the port location (e.g., /dev/ttyUSB1 or COM1) and the baud rate. The default baud rate for HomeWorks processors is set to 9600.
lutron:hwserialbridge:home [serialPort="/dev/ttyUSB1", baudRate="9600"]
Dimmers have one required parameter address that specifies the device address (e.g., [01:01:03:02:04]) and two optional parameters: fadeTime which sets the time it takes to set the light level when changed, and defaultLevel which sets the level to use for the dimmer when turning it on (with a switch rather than a slider).
lutron:hwdimmer:dimmer1 [address="[01:01:03:02:04]", fadeTime="1", defaultLevel="75"]
# Channels
The following channels are supported:
Thing Type Channel Type ID Item Type Description
dimmer lightlevel Dimmer Increase/decrease the light level
# Lutron Grafik Eye 3x/4x binding via GRX-PRG or GRX-CI-PRG
This Lutron binding will also work with Grafik Eye 3x/4x systems in conjunction with the GRX-PRG or GRX-CI-PRG interfaces. Please see RS232ProtocolCommandSet (opens new window) for more information.
# Supported Things
1-8 Grafik Eye 3x/4x System(s) through the interface
# Discovery
This binding does not support discovery of the GRX-PRG or GRX-CI-PRG. You will need to specify them directly.
# Thing Configuration
The bridge requires the IP address/Host name of the bridge. Optionally, you may specify the username (defaults to 'nwk') and retryPolling (in seconds) to retry connections if the connection fails (defaults to 10 seconds). This bridge does support two way communication with the Grafik Eye units (if a scene is selected or a zone changed on the unit or via a keypad, that information is immediately available in openHAB).
lutron:prgbridge:home [ ipAddress="192.168.1.51", user="nwk", retryPolling=10 ]
The Grafik Eye thing requires the control unit address (1-8). Optionally, you may specify the default fade time (used when raising/lowering zones or setting zone intensities) and the polling time (in seconds) to refresh state from the Grafik Eye (defaults to 30 seconds). If any of the zones control a QED shade (via the SG/SO-SVCN/SVCI keypad), those zones should be listed as a comma-separated list in the shadeZones parameter.
lutron:grafikeye:home (lutron:prgbridge:home) [ controlUnit=1, fade=10, polling=30, shadeZones="2,3,4" ]
# Channels
# Bridge channels
Channel Type ID Readonly Item Type Description
zonelowerstop No Switch Stops zone lowering on all control units
zoneraisestop No Switch Stops zone raising on all control units
timeclock No DateTime Current time on the PRG
schedule No Number Current Schedule (0=Disabled, 1=Weekday, 2=Weekend)
sunrise Yes DateTime Time of Sunrise
sunset Yes DateTime Time of Sunset
ssstart No Switch Starts the Super Sequence
sspause No Switch Pauses the Super Sequence
ssresume No Switch Resumes the Super Sequence
ssstatus Yes String Status of the Super Sequence (R=Running, S=Stopped)
ssnextstep Yes Number Next sequence number in the Super Sequence
ssnextminute Yes Number How many minutes until the next step in the Super Sequence
ssnextsecond Yes Number How many seconds until the next step in the Super Sequence
buttonpress Yes String Last keypad button pressed (see Appendix A) in protocol guide
# Grafik Eye channels
Channel Type ID Readonly Item Type Description
scene No Number The current scene
scenelock No Switch Locks/unlocks the current scene
sceneseq No Switch Starts/Stops the scene sequence
zonelock No Switch Locks/unlocks the zones
zonefade No Number The seconds to fade from one intensity to the next
zonelowerX No Switch Lowers the specified zone
zoneraiseX No Switch Raises the specified zone
zoneintensityX No Number Specifies the zone intensity
zoneshadeX No Rollershutter Specifies the shade zone
# Notes
The "buttonpress" channel reports which keypad button was pressed. DIP switch 6 must be set on the interface for this to be reported. The "buttonpress" channel is only useful in rules to take action when a specific button (on a specific keypad) has been pressed.
Sunset/sunrise will only be available if configured via the Liaison software.
scenelock, sceneseq, zonelock cannot be determined from the API and will default to OFF on startup
Replace the "X" on zonelowerX, zoneraiseX, etc with the zone in question. "zonelower1" will affect zone 1. Specifying a zone larger than you have will have no effect (such as using zonelower8 on a Grafik Eye 3506 which only has 6 zones).
The zonefade value will only be used when zonelower/zoneraise/zoneintensity is issued.
zoneshade does not support PercentType or StopMoveType.Move, and those commands will be ignored.
zoneintensity can be used on a shade zone if the intensity is from 0 to 5 and should be used if wanting to set a QED preset: 0=Stop, 1=Open, 2=Close, 3=Preset 1, 4=Preset 2, 5=Preset 3
If you started a zonelower or zoneraise, the only way to stop the action is by executing an all zone stop on the bridge (i.e. zonelowerstop or zoneraisestop). The PRG API does not provide a way to stop the lowering/raising of any specific zone.
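As a sketch of this workaround, a rule could lower a zone for a fixed time and then issue the all-zone stop on the bridge (Grx_ZoneLower1 and Prg_ZoneLowerStop match the example items below; the trigger item LowerZone1Briefly is hypothetical):

```
rule "Lower zone 1 briefly"
when
    Item LowerZone1Briefly received command ON
then
    Grx_ZoneLower1.sendCommand(ON)
    Thread::sleep(2000) // let the zone lower for about 2 seconds
    // The PRG API cannot stop a single zone, so stop lowering on all zones via the bridge
    Prg_ZoneLowerStop.sendCommand(ON)
end
```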
# Example
demo.Things:
lutron:prgbridge:home [ ipAddress="192.168.1.51", user="nwk", retryPolling=10 ]
lutron:grafikeye:home (lutron:prgbridge:home) [ controlUnit=1, fade=10, polling=10 ]
demo.items:
String Prg_ButtonPress "Last Button Press [%s]" { channel = "lutron:prgbridge:home:buttonpress" }
Switch Prg_ZoneLowerStop "Zone Lower Stop" { channel = "lutron:prgbridge:home:zonelowerstop",autoupdate="false" }
Switch Prg_ZoneRaiseStop "Zone Raise Stop" { channel = "lutron:prgbridge:home:zoneraisestop",autoupdate="false" }
DateTime Prg_Time "Current Time: [%1$tF %1$tr]" { channel="lutron:prgbridge:home:timeclock" }
Number Prg_Schedule "Schedule [%s]" { channel="lutron:prgbridge:home:schedule" }
DateTime Prg_Sunrise "Sunrise [%1$tF %1$tr]" { channel="lutron:prgbridge:home:sunrise" }
DateTime Prg_Sunset "Sunset [%1$tF %1$tr]" { channel="lutron:prgbridge:home:sunset" }
Switch Prg_Start "Super Schedule Start" { channel="lutron:prgbridge:home:ssstart", autoupdate="false" }
Switch Prg_Pause "Super Schedule Pause" { channel="lutron:prgbridge:home:sspause", autoupdate="false" }
Switch Prg_Resume "Super Schedule Resume" { channel="lutron:prgbridge:home:ssresume", autoupdate="false" }
String Prg_Status "Super Schedule Status [%s]" { channel="lutron:prgbridge:home:ssstatus" }
Number Prg_NextStep "Super Schedule Next Step [%s]" { channel="lutron:prgbridge:home:ssnextstep" }
Number Prg_NextMinute "Super Schedule Next Step Minutes [%s]" { channel="lutron:prgbridge:home:ssnextminute" }
Number Prg_NextSecond "Super Schedule Next Step Seconds [%s]" { channel="lutron:prgbridge:home:ssnextsecond" }
Number Grx_Scene "Scene [%s]" { channel="lutron:grafikeye:home:scene" }
Switch Grx_SceneLock "Scene Lock" { channel="lutron:grafikeye:home:scenelock" }
Switch Grx_SceneSeq "Scene Sequence" { channel="lutron:grafikeye:home:sceneseq" }
Switch Grx_ZoneLock "Zone Lock" { channel="lutron:grafikeye:home:zonelock" }
Switch Grx_ZoneLower1 "Zone 1 Lower" { channel="lutron:grafikeye:home:zonelower1" }
Switch Grx_ZoneLower2 "Zone 2 Lower" { channel="lutron:grafikeye:home:zonelower2" }
Switch Grx_ZoneLower3 "Zone 3 Lower" { channel="lutron:grafikeye:home:zonelower3" }
Switch Grx_ZoneLower4 "Zone 4 Lower" { channel="lutron:grafikeye:home:zonelower4" }
Switch Grx_ZoneLower5 "Zone 5 Lower" { channel="lutron:grafikeye:home:zonelower5" }
Switch Grx_ZoneLower6 "Zone 6 Lower" { channel="lutron:grafikeye:home:zonelower6" }
Switch Grx_ZoneLower7 "Zone 7 Lower" { channel="lutron:grafikeye:home:zonelower7" }
Switch Grx_ZoneLower8 "Zone 8 Lower" { channel="lutron:grafikeye:home:zonelower8" }
Switch Grx_ZoneRaise1 "Zone 1 Raise" { channel="lutron:grafikeye:home:zoneraise1" }
Switch Grx_ZoneRaise2 "Zone 2 Raise" { channel="lutron:grafikeye:home:zoneraise2" }
Switch Grx_ZoneRaise3 "Zone 3 Raise" { channel="lutron:grafikeye:home:zoneraise3" }
Switch Grx_ZoneRaise4 "Zone 4 Raise" { channel="lutron:grafikeye:home:zoneraise4" }
Switch Grx_ZoneRaise5 "Zone 5 Raise" { channel="lutron:grafikeye:home:zoneraise5" }
Switch Grx_ZoneRaise6 "Zone 6 Raise" { channel="lutron:grafikeye:home:zoneraise6" }
Switch Grx_ZoneRaise7 "Zone 7 Raise" { channel="lutron:grafikeye:home:zoneraise7" }
Switch Grx_ZoneRaise8 "Zone 8 Raise" { channel="lutron:grafikeye:home:zoneraise8" }
Number Grx_ZoneFade "Zone Fade [%s sec]" { channel="lutron:grafikeye:home:zonefade" }
Dimmer Grx_ZoneIntensity1 "Zone 1 Intensity [%d %%]" { channel="lutron:grafikeye:home:zoneintensity1" }
Dimmer Grx_ZoneIntensity2 "Zone 2 Intensity [%d %%]" { channel="lutron:grafikeye:home:zoneintensity2" }
Dimmer Grx_ZoneIntensity3 "Zone 3 Intensity [%d %%]" { channel="lutron:grafikeye:home:zoneintensity3" }
Dimmer Grx_ZoneIntensity4 "Zone 4 Intensity [%d %%]" { channel="lutron:grafikeye:home:zoneintensity4" }
Dimmer Grx_ZoneIntensity5 "Zone 5 Intensity [%d %%]" { channel="lutron:grafikeye:home:zoneintensity5" }
Dimmer Grx_ZoneIntensity6 "Zone 6 Intensity [%d %%]" { channel="lutron:grafikeye:home:zoneintensity6" }
Dimmer Grx_ZoneIntensity7 "Zone 7 Intensity [%d %%]" { channel="lutron:grafikeye:home:zoneintensity7" }
Dimmer Grx_ZoneIntensity8 "Zone 8 Intensity [%d %%]" { channel="lutron:grafikeye:home:zoneintensity8" }
Rollershutter Grx_ZoneShade1 "Zone 1 Shade" { channel="lutron:grafikeye:home:zoneshade1" }
Rollershutter Grx_ZoneShade2 "Zone 2 Shade" { channel="lutron:grafikeye:home:zoneshade2" }
Rollershutter Grx_ZoneShade3 "Zone 3 Shade" { channel="lutron:grafikeye:home:zoneshade3" }
Rollershutter Grx_ZoneShade4 "Zone 4 Shade" { channel="lutron:grafikeye:home:zoneshade4" }
Rollershutter Grx_ZoneShade5 "Zone 5 Shade" { channel="lutron:grafikeye:home:zoneshade5" }
Rollershutter Grx_ZoneShade6 "Zone 6 Shade" { channel="lutron:grafikeye:home:zoneshade6" }
Rollershutter Grx_ZoneShade7 "Zone 7 Shade" { channel="lutron:grafikeye:home:zoneshade7" }
Rollershutter Grx_ZoneShade8 "Zone 8 Shade" { channel="lutron:grafikeye:home:zoneshade8" }
Lutron, GRAFIK Eye, HomeWorks, HomeWorks QS, RadioRA, RadioRA 2, RA2 Select, Caseta, Sivoia QS, Serena, seeTouch, Pico, and Quantum are trademarks of Lutron Electronics Co., Inc. HomeLink is a registered trademark of Gentex Corporation. This software and its associated documentation are not endorsed or approved by Lutron Electronics Co.
CKIP ALBERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
Homepage
Contributors
Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ws')
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
Baekjoon - [Silver 3] #2579 Climbing Stairs
Programming / Algorithm
Data Engineer kingsmo 2020. 9. 22. 00:07
Problem link: www.acmicpc.net/problem/2579
Problem description
- N: the number of stairs
- Each of the N stairs has a score.
Conditions
1. You may climb either one stair or two stairs at a time.
2. You may not step on three consecutive stairs.
3. You must step on the final stair.
As shown in the figure from the problem statement, the goal is to find the maximum total of the scores of the stairs stepped on when reaching the last stair, while satisfying these conditions.
Solution
This problem has a strong DP (dynamic programming) flavor.
Keeping in mind that the last stair must always be stepped on and that three consecutive stairs can never all be stepped on:
last stair = last score + the maximum of two cases:
1. the stair just below the last one (the point with score 10) plus the max value up to the stair three below the last (the point with score 15)
2. the max value up to the stair two below the last (the point with score 25)
Writing this as a recurrence gives:
DP[i] = stairs[i] + max(DP[i-2], stairs[i-1] + DP[i-3])
When initializing, we fill the table up to the third stair, and inputs of length 2 or less must be handled separately as a special case.
Code
from sys import stdin
stdin = open("input.txt", "r")  # redirect stdin to a file for local testing
# N: the number of stairs
N = int(stdin.readline())
stairs = [int(stdin.readline()) for _ in range(N)]
if N <= 2:
    print(sum(stairs))
else:
    DP = []
    DP.append(stairs[0])  # first stair
    DP.append(stairs[0] + stairs[1])  # max score up to the second stair
    DP.append(max(stairs[2] + stairs[1], stairs[2] + stairs[0]))  # third stair: reached via stairs 1,2 or 0,2
    for i in range(3, N):
        DP.append(stairs[i] + max(DP[i-2], stairs[i-1] + DP[i-3]))
    print(DP[-1])
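The same recurrence, wrapped in a function so it can be checked against the commonly cited sample for this problem (input 10 20 15 25 10 20, expected answer 75):

```python
def max_stair_score(stairs):
    """Max total score: climb 1 or 2 stairs at a time, never step on
    three consecutive stairs, and always step on the last stair."""
    n = len(stairs)
    if n <= 2:
        return sum(stairs)
    dp = [0] * n
    dp[0] = stairs[0]
    dp[1] = stairs[0] + stairs[1]
    dp[2] = stairs[2] + max(stairs[0], stairs[1])
    for i in range(3, n):
        # Either stair i-1 was skipped, or it was stepped on after skipping i-2
        dp[i] = stairs[i] + max(dp[i - 2], stairs[i - 1] + dp[i - 3])
    return dp[-1]

print(max_stair_score([10, 20, 15, 25, 10, 20]))  # 75
```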
#! /usr/bin/python
#~ import pandoc
from pandoc import *
def main ():
newDocument('title\nof the document', ['author1', 'author2'], '25 june\n2017')
addTOC()
newPage()
header ('Test quote Block', 1, '#header1')
quoteBlock (['text 1', ['text 2', ['text 3'], 'text 4'], 'text 5'])
#~ rawText ('test header reference [header1]')
rawText ('"blabla"')
pageBreak()
header ('Test code Block')
header ('with string', 2)
code1 = 'if (a > 3) { \n moveShip(5 * gravity, DOWN); \n}'
codeBlock (code1)
header ('with list', 2)
code2 = ['if (a > 3) {', ' moveShip(5 * gravity, DOWN);', '}']
codeBlock (code2)
header ('fenced with string', 2)
codeBlockFenced (code1)
newPage()
header ('Test line Block')
header ('with string', 2)
lineBlock1 = 'line 1 \nline 2 \nline 3'
lineBlock (lineBlock1)
header ('with list', 2)
lineBlock2 = ['line 1', 'line 2', 'line 3']
lineBlock (lineBlock2)
newPage()
header ('Test list Block')
header ('bullet list', 2)
testList1 = ['ab', ['a', ['a', 'b'], 'b', 'c'], 'b', 'c']
listBlock(testList1)
header ('ordered list', 2)
testList2 = ['ab', 'bc', 'cd']
orderedlistBlock(testList2)
newPage()
header ('horizontal rules')
horizontalRule ()
horizontalRule ('*', 4)
horizontalRule ('_', 20)
newPage()
header ('tables')
header ('simple table', 2)
header ('with header', 3)
table1_1 = [ \
[['t1', 'right'],['title2', 'left'],['t3', 'center'], ['t4', 'default']], \
['12', '12', '123', '12'], \
['123', '123', '1234', '1234'], \
['1', '1', '1', '1'] \
]
table1_2 = [ \
[['tt1', 'right', 10],['title2', 'left', 1],['t3', 'center', 10], ['t4', 'default', 40]], \
['12', '12', '123', '12'], \
['123', '123', '1234', '1234'], \
['1', '1', '1', '1'] \
]
table(table1_1, 'table 1_1 caption')
newLine()
table(table1_2, 'table 1_2 caption')
newLine()
header ('without header', 3)
table2 = [ \
[['', 'right', 10],['', 'left', 5],['', 'center', 2], ['', 'default', 20]], \
['12', '12', '123', '12'], \
['123', '123', '1234', '1234'], \
['1', '1', '1', '1'] \
]
table(table2, 'test', ' ')
newPage()
header ('multiline table', 2)
header ('with header', 3)
table3_1 = [ \
[['Centered\nHeader', 'center'],['Default\nAligned', 'default'],['Right\nAligned\ntest', 'right'], ['Left\nAligned', 'left']], \
['First', 'rowbidule', '12.0', 'Example of a row that\nspans multiple lines.'], \
['Second', 'row', '5.0', 'Here\'s another one. Note\nthe blank line between\nrows.'], \
]
table(table3_1)
header ('without header', 3)
table3_2 = [ \
[['', 'center'],['', 'default'],['', 'right', 20], ['', 'left', 50]], \
['First', 'row', '12.0', 'Example of a row that\nspans multiple lines.'], \
['Second', 'row', '5.0', 'Here\'s another one. Note\nthe blank line between\nrows.'], \
]
table(table3_2)
header ('grid table', 2)
header ('with header', 3)
table4_1 = [ \
[['Fruit', 'default'],['Price', 'default'],['Advantages', 'default']], \
['Bananas', '$1.34', '- built-in wrapper\n- bright color'], \
['Oranges', '$2.10', '- cures scurvy\n- tasty'], \
]
table(table4_1, '', ' ', 'grid')
table4_2 = [ \
[['Fruit', 'left', 20],['Price', 'center'],['Advantages', 'right', 30]], \
['Bananas', '$1.34', '- built-in wrapper\n- bright color'], \
['Oranges', '$2.10', '- cures scurvy\n- tasty'], \
]
table(table4_2, '', ' ', 'grid')
header ('without header', 3)
table4_3 = [ \
[['', 'left'],['', 'center'],['', 'right']], \
['Bananas', '$1.34', '- built-in wrapper\n- bright color'], \
['Oranges', '$2.10', '- cures scurvy\n- tasty'], \
]
#~ table4_3 = [ \
#~ [['', 'left', 20],['', 'center'],['', 'right', 30]], \
#~ ['Bananas', '$1.34', '- built-in wrapper\n- bright color'], \
#~ ['Oranges', '$2.10', '- cures scurvy\n- tasty'], \
#~ ]
table(table4_3, '', ' ', 'grid')
header ('pipe table', 2)
header ('with header', 3)
table5_1 = [ \
[['Right', 'right'],['Left', 'left'],['Default', 'default'], ['Center', 'center']], \
['12', '12', '123', '12'], \
['123', '123', '123', '123'], \
['1', '1', '1', '1'] \
]
table(table5_1, '', ' ', 'pipe')
table5_2 = [ \
[['Right\ntruc', 'right'],['Left', 'left', 30],['Default', 'default'], ['Center', 'center']], \
['12', '12', '123', '12'], \
['123', '123', '123', '123'], \
['1', '1', '1', '1'] \
]
table(table5_2, '', ' ', 'pipe')
header ('without header', 3)
table5_3 = [ \
[['', 'right'],['', 'left'],['', 'default'], ['', 'center']], \
['12', '12', '123', '12'], \
['123', '123', '123', '123'], \
['1', '1', '1', '1'] \
]
table(table5_3, '', ' ', 'pipe')
newPage()
header ('text formatting')
header ('emphase', 2)
rawText('this is an ' + italic('italic') + ' emphase')
newLine()
rawText('this is a ' + bold('bold') + ' emphase')
newLine()
header ('strikeout', 2)
rawText('this is a ' + strikeout('strike out'))
newLine()
header ('power and indice', 2)
rawText('power : 2' + power('10'))
newLine()
rawText('indice : H' + indice('2') + '0')
newLine()
header ('verbatim', 2)
rawText('verbatim text : ' + verbatim('this is a verbatim text') + '\n')
rawText('verbatim text : ' + verbatim('this is a `verbatim` text'))
newLine()
header ('small caps', 2)
rawText('small caps text : ' + smallCaps('this is a smallcaps text'))
header ('math', 2)
## tex math format shall be used with \\ for special char
formula = '\\sqrt{\\frac{x^2}{3}}'
rawText('math formula : ' + math(formula))
newLine()
newPage()
header ('references')
header ('automatic link', 2)
urlLink = url('http://google.com')
emailLink = email('sam@green.eggs.ham')
rawText('url : ' + urlLink)
rawText('email : ' + emailLink)
newLine()
header ('inline link', 2)
header ('url', 3)
inlineLink = url('http://fsf.org', 'inline link', 'click here for a good time!')
rawText('This is an ' + inlineLink)
header ('email', 3)
inlineLink = email('sam@green.eggs.ham', 'Write me !')
rawText(inlineLink)
newLine()
header ('reference link', 2)
header ('non implicit', 3)
referenceLink = reference('FSF', 'My Website')
rawText('See ' + referenceLink)
newLine()
header ('implicit', 3)
referenceLink = reference('My Website')
rawText('See ' + referenceLink)
newLine()
header ('internal link', 2)
headerLink = reference('#introduction', 'Introduction')
rawText ('See the ' + headerLink)
newLine()
rawText ('Or')
newLine()
headerLink = reference('Introduction')
rawText ('See the ' + headerLink)
newLine()
header ('images', 2)
imageRef = image('la_lune.jpg', 'la lune', 'Voyage to the moon')
rawText('An image : ' + imageRef)
newLine()
imageRef = image('la_lune.jpg', width = '50%')
rawText('An image : ' + imageRef)
newLine()
imageRef = image('Blade Runner')
rawText('An image : ' + imageRef)
newLine()
imageRef = image('logo.png', 'SVG Logo', 'SVG Logo', width = '50%')
rawText('An image : ' + imageRef)
newLine()
header ('definition', 2)
definedReference('http://fsf.org', 'FSF', 'click here for a good time!')
definedReference('http://truc.org', 'My Website')
definedReference('#introduction', 'Introduction')
definedReference('BladeRunner.gif', 'Blade Runner', 'title', 'width=10cm height=20px')
newLine()
newPage()
header('footnote')
header('reference with label', 2)
f1 = footnote()
f2 = footnote('longFoot')
rawText('this is a simple footnote ' + f1 + ' and a long one ' + f2)
newLine()
header('inline footnote', 2)
f3 = footnote(text = 'Inlines notes are easier to write, since \nyou don\'t have to pick an identifier and move down to type the \nnote.')
rawText('Here is an inline note.' + f3)
newLine()
header('definition', 2)
definedFootnote('1', 'A simple footnote.')
definedFootnote('longFoot', 'Here\'s one with multiple blocks.\n\nSubsequent paragraphs are indented to show that they\nbelong to the previous footnote.\n\nThe whole paragraph can be indented, or just the first \nline. In this way, multi-paragraph footnotes work like \nmulti-paragraph list items.', '\n')
newLine()
newPage()
header('citations')
c1 = citations('See @doe99', 'pp. 33-35', '@smith04')
rawText('Blah blah ' + c1 + '\n')
c2 = citations('@doe99', '@smith04')
rawText('Blah blah ' + c2 + '\n')
c3 = citations('@doe99', 'pp. 33-35, 38-39')
rawText('Blah blah ' + c3 + '\n')
c4 = citations('-@doe99')
rawText('Blah blah ' + c4 + '\n')
addBibliography('./biblio.bib')
finalizeDocument()
#~ printDocument ()
#~ serialize ('test.md')
convert('test.pdf', 'latex')
if __name__ == "__main__":
main ()
#~ sys.exit (0); # exit
1. URL configuration
The URL configuration (URLconf) is like a table of contents for the website Django serves. It is essentially a mapping table between URLs and the view functions to call for those URLs. This is how you tell Django: for this URL call this code, for that URL call that code.
1.1 Basic format
from django.conf.urls import url
# Django loops over urlpatterns and executes the view of the first pattern that matches the path, then stops looping. The view receives a request argument which, like wsgiref's environ, carries the full request information.
urlpatterns = [
url(regular expression, view function, kwargs, alias),
]
Note:
The routing system in Django 2.0 has been replaced with the style below, but Django 2.0 remains backward compatible with the 1.x syntax (see the official docs):
from django.urls import path
urlpatterns = [
path('articles/2003/', views.special_case_2003),
path('articles/<int:year>/', views.year_archive),
path('articles/<int:year>/<int:month>/', views.month_archive),
path('articles/<int:year>/<int:month>/<slug:slug>/', views.article_detail),
]
1.2 Parameter descriptions
regular expression: a regex string
view function: a callable, usually a view function or a string giving the path to one
kwargs: optional default arguments to pass to the view function (as a dictionary)
alias: an optional name parameter
2. Regular expressions in detail
2.1 Basic configuration
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^articles/2003/$', views.special_case_2003), # think: for 2004, 2005, 2006... you would not write one url per year; a regex like \d{4} after articles/ covers them all -- try 127.0.0.1:8000/articles/1999/ in the browser
url(r'^articles/([0-9]{4})/$', views.year_archive),
url(r'^articles/([0-9]{4})/([0-9]{2})/$', views.month_archive), # think: to use the year the user typed (say, to query articles for that year from the database), you must capture it; a pair of parentheses, /(\d{4})/, does the grouping
url(r'^articles/([0-9]{4})/([0-9]{2})/([0-9]+)/$', views.article_detail),
]
2.2 Notes
The elements of urlpatterns are matched against the regular expressions from top to bottom, in the order written; matching stops at the first hit.
To capture a value from the URL, put a pair of parentheses around it (group matching).
There is no need for a leading slash (the / at the very front of the regex), because every URL has one: write ^articles, not ^/articles.
The 'r' in front of each regular expression is optional but recommended.
^ and $ constrain how the path starts and ends, strictly limiting the match (e.g. ^articles$).
2.3补充说明
# 是否开启URL访问地址后面不为/跳转至带有/的路径的配置项
APPEND_SLASH=True
Django settings.py配置文件中默认没有 APPEND_SLASH 这个参数,但 Django 默认这个参数为 APPEND_SLASH = True。 其作用就是自动在网址结尾加'/'。其效果就是:我们定义了urls.py:
from django.conf.urls import url
from app01 import views
urlpatterns = [
url(r'^blog/$', views.blog),
]
If settings.py sets APPEND_SLASH=False, a request for http://www.example.com/blog now reports that the page cannot be found.
3. Named group matching
3.1 Implementation
Here is the URLconf above rewritten using named groups:
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^articles/(?P<year>[0-9]{4})/$', views.year_archive),
# (?P<year>[0-9]{4}) is a named group capturing the year
# the view must be year_archive(request, year); the parameter name must be year
# without a trailing $ this pattern would also intercept month URLs, because its regex matches them as well
url(r'^articles/(?P<year>[0-9]{4})/(?P<month>[0-9]{2})/(?P<day>[0-9]{2})/$', views.article_detail), # a given year, month and day
]
For example, for the url /articles/2017/12/ the view is called as:
views.month_archive(request, year="2017", month="12"); year and month may appear in either order, since the values are passed by name (remember keyword arguments?).
3.2 What the URLconf matches against
The URLconf searches the requested URL as a plain Python string. The match does not include the HTTP method (GET or POST), the query parameters, or the domain name.
For example, in a request for http://www.example.com/myapp/, the URLconf looks at myapp/.
In a request for http://www.example.com/myapp/?page=3, the URLconf still looks at myapp/.
3.3 Captured arguments are always strings
Every argument captured in the URLconf is passed to the view as a plain Python string, no matter what kind of match the regular expression makes. For example, in this URLconf line:
url(r'^articles/(?P<year>[0-9]{4})/$', views.year_archive)
# the year argument passed to views.year_archive() is always a string.
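To make this concrete, here is a small standalone sketch (plain Python, no Django required; the tuple the view returns is made up for illustration) showing why the captured value must be converted before arithmetic:

```python
# Hypothetical view for url(r'^articles/(?P<year>[0-9]{4})/$', views.year_archive):
# the captured `year` arrives as a string, so convert it before doing math.
def year_archive(request, year):
    year = int(year)           # "2017" -> 2017
    return (year, year + 1)    # e.g. the bounds of a date-range query

# Simulate how the URL resolver would call the view (request stubbed as None):
print(year_archive(None, "2017"))  # (2017, 2018)
```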
3.4 Specifying default values in the view
# in urls.py
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^blog/$', views.page),
url(r'^blog/page(?P<num>[0-9]+)/$', views.page),
]
# in views.py, num can be given a default value
def page(request, num="1"):
    pass
# both URL patterns point to the same view, views.page, but the first captures nothing from the URL
# if the first pattern matches, page() uses its default num="1"; if the second matches, page() uses the num captured by the regex
3.5 Including other URLconfs
1. The project has one urls file and each app has its own urls file, implementing routing dispatch:
from django.conf.urls import include, url
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^blog/', include('blog.urls')), # other URLconf files can be included
url(r'^app01/',include('app01.urls')),
# remember to create a urls.py file in the app01 application;
# every request path starting with app01 is sent to app01's urls file to find the matching view;
# the app01 pattern here must not end with $, because with a $ nothing after app01/ could ever match
]
2. The contents of app01's urls.py:
from django.conf.urls import url
#from django.contrib import admin
from app01 import views
urlpatterns = [
# url(r'^admin/', admin.site.urls),
url(r'^articles/2003/', views.special_case_2003,{'foo':'xxxxx'}),
url(r'^articles/(\d{4})/(\d{2})/', views.year_archive),
]
3. Execution flow
1. The browser sends a request for http://127.0.0.1:8000/app01/articles/2003/
2. Django locates the project's urls.py
3. It matches app01/
4. It takes articles/2003/ into app01's urls.py for matching
5. The corresponding view function runs
3.6 Matching the root path, and registering apps
1. Matching the root path:
url(r'^$', views.index), # begins empty and ends empty; in the project's urls.py this is the site's home page, in an app's urls.py it is that app's home page
2. When you create an app, remember to register it in the settings file:
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rbac.apps.RbacConfig',
    'app01.apps.App01Config',  # either this form
    'app02.apps.App02Config',  # or simply "app02"
]
3.7 Passing extra arguments to view functions
URLconfs have a hook that lets you pass a Python dictionary of extra arguments to the view function.
The django.conf.urls.url() function accepts an optional third argument: a dictionary of extra keyword arguments to pass to the view function.
from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^blog/(?P<year>[0-9]{4})/$', views.year_archive, {'foo': 'bar'}), # note: as with a named group, the view must accept a parameter, and that parameter must be named foo.
]
4. Named URLs (aliases) and reverse resolution
4.1 Naming a route
url(r'^home', views.home, name='home'),
# names this url pattern 'home'; the alias stays fixed while the path can change freely, and other code refers to the path through the alias
url(r'^articles/([0-9]{4})/$', views.year_archive, name='news-year-archive'),
# names this url pattern 'news-year-archive'
4.2 Reverse resolution
1. In an HTML template:
{% url 'home' %} # when the template is rendered, Django resolves this name to the corresponding url; the process is called reverse resolution
<a href="{% url 'news-year-archive' yearvar %}">{{ yearvar }} Archive</a>
2. In a view function:
from django.urls import reverse
#reverse("index", args=("2018", ))
return redirect(reverse('news-year-archive', args=(year,)))
# or simply: return redirect('news-year-archive', year)
5. URL namespaces
1. The project's urls.py:
from django.conf.urls import url, include
urlpatterns = [
url(r'^app01/', include('app01.urls', namespace='app01')),
url(r'^app02/', include('app02.urls', namespace='app02')),
]
2. app01's urls.py:
from django.conf.urls import url
from app01 import views
app_name = 'app01'
urlpatterns = [
url(r'^(?P<pk>\d+)/$', views.detail, name='detail')
]
3. app02's urls.py:
from django.conf.urls import url
from app02 import views
app_name = 'app02'
urlpatterns = [
url(r'^(?P<pk>\d+)/$', views.detail, name='detail')
]
4. In a template:
{% url 'app01:detail' pk=12 pp=99 %}
5. In a view function:
v = reverse('app01:detail', kwargs={'pk':11})
Environment
Red Hat Enterprise Linux (RHEL)
Issue
Performing yum update results in the error: No module named yum
Running sosreport fails with: No module named os
Running subscription-manager commands fails with: No module named version
Resolution
There are multiple possible causes for this issue. Usually it is the result of the python path being set incorrectly. The python path sys.path is built dynamically during python initialization using several methods, so depending on the system in question the fix may be one of the following:
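Before trying the resolutions below, it can help to see how the path was actually built on the affected system. This short snippet (a generic diagnostic, not from the original article) prints the inputs python used:

```python
# Show whether PYTHONHOME is set and where this interpreter searches for
# modules; on a healthy RHEL system the stdlib entries live under sys.prefix
# (normally /usr).
import os
import sys

print("PYTHONHOME set:", "PYTHONHOME" in os.environ)
print("sys.prefix:", sys.prefix)
for path in sys.path:
    print(" ", path)
```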
Resolution 1
* Unset the PYTHONHOME variable:
# unset PYTHONHOME
For a permanent change, remove the entry from root's .bashrc or .bash_profile if present.
Reinstall the python package by running the following command:
# rpm -Uvh --replacefiles --replacepkgs python-<version>.rpm
Resolution 2
* If there is no PYTHONHOME variable set, check to ensure there is no third-party python located underneath the improper /lib/ location instead of /lib64/.
* Check the python path as well as ldd to see which files are being loaded:
# python -c "import sys; print(sys.path)"
['', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib/python2.7/site-packages']
# ldd /usr/bin/python
linux-vdso.so.1 => (0x00007ffd46b3b000)
libpython2.7.so.1.0 => /lib64/libpython2.7.so.1.0 (0x00007efe38aaf000) <-----take note of this file location
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007efe38893000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007efe3868f000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007efe3848c000)
libm.so.6 => /lib64/libm.so.6 (0x00007efe3818a000)
libc.so.6 => /lib64/libc.so.6 (0x00007efe37dbc000)
/lib64/ld-linux-x86-64.so.2 (0x00007efe38e7b000)
The above output is what is expected. If you see /usr/lib/python2.7 being loaded instead of /usr/lib64/python2.7, you should check the following:
# ls -l /lib/libpython2.7.so.1.0
# rpm -qf /lib/libpython2.7.so.1.0
If you find that there is a file at /lib/libpython2.7.so.1.0 and it is not owned by any package, you should move that file aside and see if the issue remains:
# mv /lib/libpython2.7.so.1.0 /tmp/
Root Cause
The PYTHONHOME variable was set as an environment variable on the system.
Python libraries/files have been modified, which can be observed in the output of the rpm -Va command.
Third-party python modules are installed on the system, which show up in the output of the ldd /usr/bin/python command.
The package rpm-python* is not installed.
A third-party /lib/libpython2.7.so.1.0 is being loaded instead of the proper system location /lib64/libpython2.7.so.1.0. This causes the python module search path defined by sys.path to be set incorrectly, resulting in the system not finding the installed python modules.
Diagnostic Steps
# sosreport
'import site' failed; use -v for traceback
Traceback (most recent call last):
File "/usr/sbin/sosreport", line 29, in ?
import os
ImportError: No module named os
# yum -d10 update
'import site' failed; use -v for traceback
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:
No module named yum
Please install a package which provides this module, or
verify that the module is installed correctly.
It's possible that the above module doesn't match the
current version of Python, which is:
2.4.3 (#1, Jul 16 2009, 06:20:46)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)]
If you cannot solve this problem yourself, please go to
the yum faq at:
http://wiki.linux.duke.edu/YumFaq
Collect the following details from system
# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}-%{ARCH}\n" | sort > /tmp/rpm-list
# ldd /usr/bin/python
Strace command output
# strace -fxvto /tmp/strace.out yum update
# which python
# env > /tmp/env.out
# rpm -qf `which yum`
# rpm -Va > /tmp/rpm_va
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.
What is Python 2?
Python 2 made the code-development process easier than earlier versions, and it implemented the technical details of the Python Enhancement Proposals (PEPs). Python 2.7 (the last 2.x release) is no longer developed and reaches end of life in 2020.
In this tutorial you will learn: what is Python 2? What is Python 3? Why learn Python 2? Why use Python 3?
History of Python 2
History of Python 3
Key differences between Python 2 and Python 3
Python 2 vs. Python 3 sample code
Which Python version should you use?
What is Python 3?
In December 2008, Python released version 3.0, mainly to fix problems in Python 2. The nature of those changes means Python 3 is incompatible with Python 2: it is not backward compatible. Some Python 3 features were backported into the 2.x releases to ease the migration to Python 3.
So for any organization on Python 2.x, migrating projects to 3.x requires extensive changes, affecting not only projects and applications but all of the libraries that make up the Python ecosystem.
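One of those backported bridges can be seen directly: a __future__ import (this example is mine, not from the article) lets the same file behave identically under Python 2.7 and Python 3.x:

```python
# Under Python 2.7 these imports switch on Python 3 semantics; under
# Python 3 they are harmless no-ops, so the file runs the same everywhere.
from __future__ import print_function, division

print("Hello World!")  # print is a function in both versions
print(7 / 2)           # true division: 3.5 in both versions
```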
Why learn Python 2?
Although Python 2 is the older open-source line, there are still reasons to learn it:
To work as a DevOps engineer you need configuration-management tools such as Puppet or Ansible, which requires working with both versions;
If your company's code is written in Python 2, you will need to learn to work with it;
If your team's project depends on specific third-party libraries or software that cannot be ported to Python 3, Python 2 is your only available option;
Why use Python 3?
Python 3 supports modern fields such as AI, machine learning, and data science;
Python 3 is backed by a large Python developer community, so getting support is easy;
History of Python 2
History of Python 3
Key differences between Python 2 and Python 3
Python 2 vs. Python 3 sample code
Python 3:
def main():
    print("Hello World!")

if __name__== "__main__":
    main()
Python 2:
def main():
    print "Hello World!"

if __name__== "__main__":
    main()
Which Python version should you use?
Between Python version 2 and version 3 today, Python 3 is the clear winner, because Python 2 is unsupported after 2020. Broad adoption of Python 3 is the clear direction going forward.
Given the declining support for Python 2 and the benefits of upgrading, the recommendation for new developers is to always choose Python 3.
The Sieve of Eratosthenes - Ancient algorithms
This article describes how the Sieve of Eratosthenes works and shows implementations in several languages.
The Sieve of Eratosthenes is an algorithm for finding the prime numbers within a range of natural numbers. It was devised by Eratosthenes, a Greek mathematician, geographer, poet, astronomer and musician who lived in the 2nd century BC and became head of the Library of Alexandria.
Although it is of little use today as a tool for finding new primes, it is used to compare the speed of different languages.
The Sieve of Eratosthenes algorithm
1. Build a list of consecutive integers starting at two, up to the amount of numbers you want to test.
2. Start with p=2, the smallest prime number.
3. Enumerate the multiples of p, from 2p to the end of the list, and mark them in the list.
4. Find the smallest unmarked number greater than p. If no such number exists, the algorithm ends. If it exists, it is prime; set p to this new number.
5. Repeat from step 3.
Pseudocode with PSeInt
//Sieve of Eratosthenes
Proceso CribaEratostenes
    //Set up the list of numbers to evaluate
    limite=16
    Dimension numeros[limite]
    //Initialize the list of numbers to evaluate
    Para i=2 Hasta limite Con Paso 1 Hacer
        numeros[i]=Verdadero;
    FinPara
    //Make 2 the first prime number
    numeros[2]=Verdadero;
    //Walk the numbers, and for each one
    Para n=2 Hasta limite Con Paso 1 Hacer
        //If it is prime, walk its multiples and mark them as not prime
        Si numeros[n]==Verdadero
            Para i=n*n Hasta limite Con Paso n Hacer
                numeros[i] = Falso;
            FinPara
        FinSi
    FinPara
    //Display the list of primes
    Escribir "Primos"
    Para n=2 Hasta limite Con Paso 1 Hacer
        Si numeros[n]==Verdadero
            Escribir n
        FinSi
    FinPara
FinProceso
Algorithm in PHP
<?php
//Sieve of Eratosthenes
//Build the list of numbers to evaluate
$limite=16;
for($i=2;$i<$limite;$i++)
{
$numeros[$i]=true;
}
//Make 2 the first prime number
$numeros[2]=true;
//Walk the numbers, and for each one
for ($n=2;$n<$limite;$n++)
{
//If it is prime, walk its multiples and mark them as not prime
if ($numeros[$n])
{
for ($i=$n*$n;$i<$limite;$i+=$n)
{
$numeros[$i] = false;
}
}
}
//Display the list of primes
echo "Primos: ";
for ($n = 2; $n < $limite; $n++)
{
if ($numeros[$n])
{
echo $n." ";
}
}
Algorithm in C
#include<stdio.h>
#define LIMITE 16
//Sieve of Eratosthenes
int main(int argc, char** argv){
int i,j,n;
int numeros[LIMITE];
//Build the list of numbers to evaluate
for(i=2;i<LIMITE;i++){
numeros[i]=1;
}
//Make 2 the first prime number
numeros[2]=1;
//Walk the numbers, and for each one
for (n=2;n<LIMITE;n++){
//If it is prime, walk its multiples and mark them as not prime
if (numeros[n]){
for (i=n*n;i<LIMITE;i+=n){
numeros[i] = 0;
}
}
}
//Display the list of primes
printf("Primos: ");
for (n = 2; n < LIMITE; n++){
if (numeros[n]){
printf("%d ",n);
}
}
return 0;
}
Algorithm in Python
#Sieve of Eratosthenes
#Build the list of numbers to evaluate
limite = 16
numeros = [True] * limite
#Walk the numbers, and for each one
for n in range(2, limite):
    #If it is prime, walk its multiples and mark them as not prime
    if numeros[n]:
        for i in range(n*n, limite, n):
            numeros[i] = False
#Display the list of primes
print("Primos: ")
for n in range(2, limite):
    if numeros[n]:
        print(str(n)+" ")
This code is only one of the possible implementations of this algorithm; it can also be written without arrays, or with a separate list to collect the primes as they are found.
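As a sketch of that second variant (my adaptation of the article's Python version, not part of the original), the primes can be collected into their own list while sieving:

```python
# Sieve of Eratosthenes, collecting the primes found into a separate list
limite = 16
numeros = [True] * limite

primos = []
for n in range(2, limite):
    if numeros[n]:           # n survived all earlier markings, so it is prime
        primos.append(n)
        for i in range(n * n, limite, n):
            numeros[i] = False

print(primos)  # [2, 3, 5, 7, 11, 13]
```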
I hope this helps you practice in other programming languages!
I'm trying to do the Login for my Django (2.0) website, so far I've got the login working for existing accounts. I'm using the built-in login function.
Now I want to display an error message when you enter an invalid account, for example "Invalid username or password!". But I have no idea how to go about this.
Right now it just refreshes the login page when you enter an invalid account. Any help is appreciated!
Login.html
{% block title %}Login{% endblock %}
{% block content %}
<h2>Login</h2>
<form method="post">
{% csrf_token %}
{{ form.as_p }}
<button type="submit">Login</button>
</form>
{% endblock %}
views.py
def login(request):
    if request.method == 'POST':
        form = AuthenticationForm(request.POST)
        username = request.POST['username']
        password = request.POST['password']
        user = authenticate(username=username, password=password)
        if user is not None:
            if user.is_active:
                auth_login(request, user)
                return redirect('index')
    else:
        form = AuthenticationForm()
    return render(request, 'todo/login.html', {'form': form})
in your template
{% for message in messages %}
    <div class="alert alert-success">
        <a class="close" href="#" data-dismiss="alert">×</a>
        {{ message }}
    </div>
{% endfor %}
in view
from django.contrib import messages
def login(request):
    if request.method == 'POST':
        form = AuthenticationForm(request.POST)
        username = request.POST['username']
        password = request.POST['password']
        user = authenticate(username=username, password=password)
        if user is not None:
            if user.is_active:
                auth_login(request, user)
                return redirect('index')
        else:
            messages.error(request, 'username or password not correct')
            return redirect('login')
    else:
        form = AuthenticationForm()
    return render(request, 'todo/login.html', {'form': form})