HAYASHIER.COM - Private Page
Development notes

General Tips

git

squash-and-rebase strategy

$ git log --oneline
9da651f (HEAD -> master) add test4
7eb615c add test3
5c615c9 add test2
eddb4ca add test1
a76e13c add file

Specify the hash of the commit just before the target commits:

$ git rebase -i a76e13c

(after completing the squash below, the rebase reports:)
[detached HEAD edac4fe] add test1 merged
 Date: Thu Feb 6 09:51:19 2020 +0900
 1 file changed, 4 insertions(+)
Successfully rebased and updated refs/heads/master.
pick eddb4ca add test1                                                                                                          
pick 5c615c9 add test2
pick 7eb615c add test3
pick 9da651f add test4

# Rebase a76e13c..9da651f onto a76e13c (4 commands)
#
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup <commit> = like "squash", but discard this commit's log message
# x, exec <command> = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop <commit> = remove commit
# l, label <label> = label current HEAD with a name
# t, reset <label> = reset HEAD to a label
# m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
# .       create a merge commit using the original merge commit's
# .       message (or the oneline, if no original merge commit was
# .       specified). Use -c <commit> to reword the commit message.
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

Change the commits you want to merge from pick to squash. If you change them all to squash, you get the error Cannot 'squash' without a previous commit, so the first one must stay pick.

pick eddb4ca add test1                                                                                                          
s 5c615c9 add test2
s 7eb615c add test3
s 9da651f add test4

This opens the following screen. The first commit's message is the one that remains on the merged commit, so rewrite it as needed. Here, "add test1" is changed to "add test1 merged".

# This is a combination of 4 commits.
# This is the 1st commit message:

add test1

# This is the commit message #2:

add test2

# This is the commit message #3:

add test3

# This is the commit message #4:

add test4

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# Date:      Thu Feb 6 09:51:19 2020 +0900
#
# interactive rebase in progress; onto a76e13c
# Last commands done (4 commands done):
#    squash 7eb615c add test3
#    squash 9da651f add test4
# No commands remaining.
# You are currently rebasing branch 'master' on 'a76e13c'.
#
# Changes to be committed:
# modified:   test
# 
$ git log --oneline        
edac4fe (HEAD -> master) add test1 merged
a76e13c add file
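The same squash can also be done non-interactively with git reset --soft, which moves HEAD back while keeping the combined changes staged. A sandboxed sketch (the repo and messages mirror the example above, but everything is created in a temp directory):

```shell
# Sandboxed demo: squash the last 4 commits into one with reset --soft.
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
echo base > file && git add file && git commit -q -m "add file"
for i in 1 2 3 4; do
  echo "test$i" >> test && git add test && git commit -q -m "add test$i"
done
git reset --soft HEAD~4               # move HEAD back 4 commits; tree stays staged
git commit -q -m "add test1 merged"   # one commit now holds all 4 changes
git log --oneline
```

Unlike rebase -i this never opens an editor, so it is handy in scripts; the resulting tree is identical either way.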

Merging. Run git pull or brazil ws sync beforehand as needed.

$ git checkout mainline
Switched to branch 'mainline'
Your branch is up to date with 'origin/mainline'.

$ git merge dev-branch
Updating bc26041..bf22fc3
Fast-forward
 rails-root/app/controllers/test_controller.rb | 6 ++++++
 1 file changed, 6 insertions(+)

$ git push

Resolving conflicts

git fetch origin
git checkout develop
git pull origin develop
git checkout new-branch
git merge develop
(conflicts occur here, so fix them)
git push [-f] origin new-branch

Making a file's existence disappear from git

git filter-branch -f --index-filter 'git rm --cached --ignore-unmatch <filename>' HEAD
git push --force origin master
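Since filter-branch rewrites every commit, the file disappears from the whole history, not just the tip. A sandboxed check (file names here are illustrative):

```shell
# Sandboxed demo: strip a file from all of history, then verify it is gone.
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
echo secret > credentials.txt && echo code > app.txt
git add . && git commit -q -m "initial"
echo more >> app.txt && git add . && git commit -q -m "update"
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --index-filter \
  'git rm --cached --ignore-unmatch credentials.txt' HEAD
git log --oneline -- credentials.txt   # prints nothing: no commit touches it now
git ls-tree -r HEAD --name-only        # only app.txt remains
```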

Tracking updates to a forked repository

Set the original repository as a remote named upstream and pull from it.

$ git clone https://github.com/hayashier/redis.git

$ git remote add upstream git://github.com/antirez/redis.git
$ git pull upstream unstable
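End to end, the flow looks like this; a sandboxed sketch with local repositories standing in for GitHub (repo names are made up):

```shell
# Sandboxed demo: a fork pulls new commits from its upstream remote.
set -e
top=$(mktemp -d)
git init -q "$top/original" && cd "$top/original"
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "base"
git clone -q "$top/original" "$top/fork" && cd "$top/fork"
git remote add upstream "$top/original"            # the original repo, as with redis above
git -C "$top/original" commit -q --allow-empty -m "upstream-change"
git pull -q upstream "$(git rev-parse --abbrev-ref HEAD)"
git log --oneline                                  # upstream-change is now in the fork
```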

this exceeds GitHub’s file size limit of 100.00 MB

Run the following and confirm that the string "Ref 'refs/heads/master' was rewritten" is printed at least once.

$ git filter-branch -f --tree-filter 'rm -f ./path/to/file' HEAD

With Git LFS, only metadata for large files is managed in Git; the actual file contents are stored centrally on a remote server.

$ brew install git-lfs
$ git lfs install

Diff of a specific commit in git

git show <commit hash>
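git show prints the commit message plus the patch against the commit's parent, so for the diff alone, git diff <hash>^ <hash> gives the same result. A sandboxed check:

```shell
# Sandboxed demo: git show HEAD's patch matches git diff HEAD^ HEAD.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com && git config user.name demo
echo one > f && git add f && git commit -q -m "first"
echo two >> f && git add f && git commit -q -m "second"
git show HEAD        # commit message + diff against the parent
git diff HEAD^ HEAD  # the same patch, without the message
```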

Switching between tags

$ git tag
2016-10-10
2016-10-12

$ git checkout -b temp-branch refs/tags/2016-10-10
$ git checkout master

Checking and deleting remote git branches

Check remote branches

git branch -a

Delete a remote branch

git branch -d delete-branch
git push origin :delete-branch
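The colon form works, but git also accepts the more readable git push origin --delete <branch>. A sandboxed sketch with a local bare repo standing in for the remote:

```shell
# Sandboxed demo: delete a remote branch with --delete (same as the colon form).
set -e
top=$(mktemp -d)
git init -q --bare "$top/remote.git"
git clone -q "$top/remote.git" "$top/work" 2>/dev/null && cd "$top/work"
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "init"
git push -q origin HEAD
git checkout -q -b delete-branch
git push -q origin delete-branch
git push -q origin --delete delete-branch   # equivalent to: git push origin :delete-branch
git ls-remote --heads origin                # delete-branch is no longer listed
```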

Getting a command's source code

$ rpm -qf /usr/sbin/sshd
openssh-server-6.6.1p1-33.66.amzn1.x86_64
$ yumdownloader --source openssh-server
$ rpm2cpio openssh-7.4p1-16.71.amzn1.src.rpm  | cpio -id
$ tar xf openssh-7.4p1.tar.gz 
$ cd openssh-7.4p1
$ sudo yum list git --show-duplicates
Loaded plugins: priorities, update-motd, upgrade-helper
Repository packages-microsoft-com-prod is listed more than once in the configuration
1126 packages excluded due to repository priority protections
Installed Packages
git.x86_64                                                       2.14.6-1.62.amzn1                                                        @amzn-updates
Available Packages
git.x86_64                                                       2.13.6-2.56.amzn1                                                        amzn-main    
git.x86_64                                                       2.14.4-2.58.amzn1                                                        amzn-updates 
git.x86_64                                                       2.14.5-1.59.amzn1                                                        amzn-updates 
git.x86_64                                                       2.14.5-1.60.amzn1                                                        amzn-updates 
git.x86_64                                                       2.14.6-1.61.amzn1                                                        amzn-updates 
git.x86_64                                                       2.14.6-1.62.amzn1                                                        amzn-updates 
$ yumdownloader --source git-2.14.6-1.62.amzn1
$ rpm -ih git-2.14.6-1.62.amzn1.src.rpm
$ tar xfJ rpmbuild/SOURCES/git-2.14.5.tar.xz 

Bandwidth measurement

Installation (Amazon Linux AMI)

$ sudo yum-config-manager --enable epel
$ sudo yum install -y iperf

or

$ sudo yum install iperf --enablerepo=epel

Measurement

Server side

$ iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.31.20.44 port 5001 connected with 35.161.116.140 port 48190
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec   112 MBytes   936 Mbits/sec
[  4]  1.0- 2.0 sec   114 MBytes   960 Mbits/sec
[  4]  2.0- 3.0 sec   115 MBytes   968 Mbits/sec

Client side

$ iperf -c 34.211.245.253 -i 1 -t 300
------------------------------------------------------------
Client connecting to 34.211.245.253, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 172.31.4.26 port 48294 connected with 34.211.245.253 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   111 MBytes   934 Mbits/sec
[  3]  1.0- 2.0 sec   112 MBytes   938 Mbits/sec
[  3]  2.0- 3.0 sec   113 MBytes   945 Mbits/sec
[  3]  3.0- 4.0 sec   111 MBytes   933 Mbits/sec

Reference

OpenSSL

Creating a self-signed certificate with OpenSSL

CN=www.example.com

openssl genrsa 2048 > server.key
openssl req -new -key server.key -subj "/C=JP/ST=Tokyo/L=Meguro/O=AWS/OU=AWS Support/CN=$CN" > server.csr
openssl x509 -days 3650 -req -signkey server.key < server.csr > server.crt
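With OpenSSL 1.1.1 or later, the three steps above can be collapsed into a single req -x509 invocation, and -addext can add a subjectAltName (which modern browsers require). A sketch; the subject and SAN values are examples:

```shell
# One-shot self-signed cert: key and certificate in a single command.
# -addext needs OpenSSL 1.1.1+; adjust the subject/SAN for your host.
set -e
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout server.key -out server.crt \
  -subj "/C=JP/ST=Tokyo/L=Meguro/O=AWS/OU=AWS Support/CN=www.example.com" \
  -addext "subjectAltName=DNS:www.example.com"
openssl x509 -in server.crt -noout -subject   # check the generated subject
```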

Reference

Squid

Building a forward proxy with Squid (by default, connections from private IP ranges are allowed)

Amazon Linux AMI, RHEL/CentOS 6

PORT=3128

sudo yum install -y squid
sudo chkconfig squid on
sudo sed -i "s/^http_port .*$/http_port $PORT/" /etc/squid/squid.conf
sudo service squid start

Building a forward proxy with Squid (Basic authentication required)

Amazon Linux AMI, RHEL/CentOS 6

PORT=3128
USER=user
PASSWORD=pass

sudo yum install -y squid httpd-tools
sudo chkconfig squid on
sudo sed -i "s/^http_port .*$/http_port $PORT/" /etc/squid/squid.conf
sudo sed -i "s/^\(http_access .*\)$/#\1/" /etc/squid/squid.conf
cat << END | sudo tee -a /etc/squid/squid.conf

auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/.htpasswd
auth_param basic children 5
auth_param basic realm Test Proxy
auth_param basic credentialsttl 24 hours
auth_param basic casesensitive off
acl pauth proxy_auth REQUIRED
http_access allow pauth
error_directory /usr/share/squid/errors/en
END
sudo touch /etc/squid/.htpasswd
sudo htpasswd -bd /etc/squid/.htpasswd $USER $PASSWORD
sudo service squid start
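If you need to narrow access beyond the default private-range ACLs, the relevant squid.conf directives look roughly like this (the CIDR below is an example, not from the original setup):

```conf
# Example squid.conf fragment: allow a single source range, deny everything else
acl myclients src 10.0.0.0/16
http_access allow myclients
http_access deny all
```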

HHKB

Spring

Error

Deploying without putting a private key on the server

Permission denied (publickey) error with Capistrano

cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as ec2-user@52.198.214.152: git exit status: 128
git stdout: Permission denied (publickey).
fatal: Could not read from remote repository.

Solution

  1. (*) Start an SSH agent locally and register the private key:
$ eval "$(ssh-agent -s)" 
Agent pid XXXX
$ ssh-add ~/.ssh/id_rsa
Identity added: /Users/xxxxx/.ssh/id_rsa (/Users/xxxxx/.ssh/id_rsa)
$ ssh-add -l
2048 SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx /Users/xxxxx/.ssh/id_rsa (RSA)
  2. Set forward_agent in Capistrano:

ENV['DEPLOY_PROUDCTION_KEY'] is the path to the private key.

set :ssh_options, {
  keys: ENV['DEPLOY_PROUDCTION_KEY'],
  forward_agent: true
}
  3. (*) Register the public key as a deploy key on the GitHub repository being pulled:

     On the project's top page (https://github.com/<owner>/<project>), choose Settings, then Deploy keys.

     Click [Add deploy key], enter any identifying name as the Title (shown later in the Deploy keys list on GitHub), and paste the contents of the public key (cat ~/.ssh/id_rsa.pub) into the Key field.

Eclipse

Error: Could not find or load main class com.leetcode.problem.p1.P1

In the Project tab choose Properties, then Java Build Path > Libraries. With Modulepath selected, choose "Add Library…". Select JRE System Library, click Next, then "Installed JREs…" and "Add…". Enter the following and click Finish:

Installed JRE Types: MacOS X VM
JRE home: /Library/Java/JavaVirtualMachines/jdk-11.0.4.jdk/Contents/Home
JRE name: any name for identification (e.g. Java SE 11)

Check the newly configured JRE and choose "Apply and Close".

“Build path entry is missing” error when trying to create a new project in Eclipse https://stackoverflow.com/questions/12252123/build-path-entry-is-missing-error-when-trying-to-create-a-new-project-in-eclip

Exception in thread “main” java.lang.UnsupportedClassVersionError: com/leetcode/problem/p1/P1 has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0

The error suggests a mismatch of the Compiler compliance level under Project > Properties > Java Compiler, but it was resolved simply by choosing Project > Clean….

Eclipse won't start

Eclipse fails to start with the following error:

No Java virtual machine was found after searching the following locations: /Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/jre/bin

Edit the following part of ~/eclipse/java-neon/Eclipse.app/Contents/Eclipse/eclipse.ini:

-vm
/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/jre/bin

After editing:

-vm
/Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home/jre/bin

Check the installed version as follows:

$ ls /Library/Java/JavaVirtualMachines/
jdk1.8.0_171.jdk

Running apt-get install -y vim in a Debian container gives E: Unable to locate package vim

Fixed by the following:

$ apt-get update

https://qiita.com/sivertigo/items/2211fc90cf13e0173cf3

Vim completion not working

Vim completion doesn't work, and pressing Tab gives an error like:

_arguments:451: _vim_files: function definition file not found

Delete .zcompdump, zsh's completion dump file, and start a new login shell:

rm ~/.zcompdump
exec $SHELL -l

http://qiita.com/Asuforce/items/28b287fdb933d1985e15

Topic

Docker

Tips for building Docker images

Start a container with something like docker run -it centos:6 /bin/bash and note down the commands as you work, making use of flags such as -y and -f, and suppressing noisy output with curl -s and wget -q; once everything works, turn the notes into a Dockerfile.

Chain commands after RUN with && to combine them into one instruction. ← Docker creates a layer per instruction, and the image you use is the union of those layers.

On failure, either exit and redo from run, or exit && docker commit at the last good point and continue from there.

You can also write RUN set -x && … so that each command in the chain is echoed to stderr, making it clear which step failed.

http://qiita.com/pottava/items/452bf80e334bc1fee69a http://wazanova.jp/items/1454 http://docs.docker.jp/engine/articles/dockerfile_best-practice.html

docker build sends every file and directory under the build directory as context → use .dockerignore
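A minimal .dockerignore might look like the following (the entries are common examples, not taken from a specific project):

```text
# Example .dockerignore: keep these out of the build context
.git
node_modules
*.log
tmp/
```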

Docker: ERROR: for app No such image: sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

ERROR: Encountered errors while bringing up the project.

docker-compose rm
Going to remove xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx
Are you sure? [yN] y
Removing xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx_xxxxxxxxxxxx ... done

Printing the Docker machine's IP address

$ docker-machine ip
192.168.99.100

Node.js in Docker

Debian installs node as nodejs (as the gist below colorfully complains), so you need to add:

RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10

https://gist.github.com/thom-nic/724a2af410e90a654e35

FROM ubuntu:14.04
RUN apt-get update -y
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs npm build-essential

RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10

RUN rm -rf /var/www/*
ADD src /var/www

WORKDIR /var/www
RUN npm install

RUN chown -R www-data:www-data /var/www
EXPOSE 3000
CMD ["node", "index.js"]

docker pull php:5.6-apache freezes

docker pull php:5.6-apache
5.6-apache: Pulling from library/php
6a5a5368e0c2: Already exists 
de059afc6b5d: Already exists 
9b1ac0143753: Already exists 
a05dcf7da589: Already exists 
5e38500c0906: Waiting 

Resolved as follows:

docker-machine stop default
docker-machine ls
docker-machine start default

client is newer than server


Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)

docker-machine's version is older than docker's. Run the following:

docker-machine upgrade default


Docker Compose setup

File layout (the directory is referenced by the Dockerfile's ADD as docker-etc/):

Dockerfile
docker-compose.yml
docker-etc/
  - apache-config.conf
  - httpd.conf
  - php-5.6.ini
Dockerfile:

FROM php:5.6-apache

RUN curl -L -o /tmp/redis.tar.gz https://github.com/phpredis/phpredis/archive/2.2.7.tar.gz \
      && tar xfz /tmp/redis.tar.gz \
      && rm -r /tmp/redis.tar.gz \
      && mv phpredis-2.2.7 /usr/src/php/ext/redis \
      && docker-php-ext-install redis

RUN usermod -u 1000 www-data \
    && groupmod -g 1000 www-data

RUN echo 'date.timezone = Asia/Tokyo' >> /usr/local/etc/php/conf.d/99_myconf.ini
RUN echo 'error_reporting = E_ALL' >> /usr/local/etc/php/conf.d/99_myconf.ini

RUN apt-get update
RUN apt-get install -y ruby ruby-dev libmysqlclient-dev mysql-client libpng12-dev libjpeg-dev
RUN apt-get install -y libmagickwand-dev

RUN docker-php-ext-configure gd --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j $(nproc) gd
RUN pecl install imagick
RUN docker-php-ext-install pdo_mysql mbstring
RUN docker-php-ext-enable imagick

RUN a2enmod rewrite

RUN gem install ridgepole
RUN gem install activerecord-mysql-awesome

WORKDIR /var/www/micoly
USER root

EXPOSE 80

# Update the default apache site with the config we created.
ADD docker-etc/apache-config.conf /etc/apache2/sites-enabled/000-default.conf
docker-compose.yml:

web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/micoly
  links:
    - db
    - redis
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: root
redis:
  image: redis
apache-config.conf:

<VirtualHost *:80>
#ServerName www.example.com

# ServerAdmin webmaster@localhost
DocumentRoot /var/www/micoly/public

<Directory /var/www/micoly/public>
   Options FollowSymLinks
   AllowOverride All
   Require all granted
   SetEnv FUEL_ENV development
</Directory>

ErrorLog /var/log/apache2/error.log
CustomLog /var/log/apache2/access.log combined
</VirtualHost>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
httpd.conf:

ServerRoot "/etc/httpd"
Listen 80
Include conf.modules.d/*.conf

User apache
Group apache

ServerAdmin root@localhost
ServerName 127.0.0.1

ServerSignature Off
ServerTokens Prod

<Directory />
    #Options None
    AllowOverride None
    Require all denied
</Directory>

DocumentRoot "/var/www/micoly"
<Directory "/var/www/html">
    Options FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

<Directory "/var/www/micoly">
    Options FollowSymLinks
    AllowOverride All
    Require all granted
    SetEnv FUEL_ENV production
</Directory>

<IfModule dir_module>
    DirectoryIndex index.html
</IfModule>

<Files ".ht*">
    Require all denied
</Files>

ErrorLog "logs/error_log"
LogLevel warn

<IfModule log_config_module>
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common

    <IfModule logio_module>
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>

    CustomLog "logs/access_log" combined
</IfModule>

<IfModule alias_module>
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

</IfModule>

<Directory "/var/www/cgi-bin">
    AllowOverride None
    Options None
    Require all granted
</Directory>

<IfModule mime_module>
    TypesConfig /etc/mime.types

    AddType application/x-compress .Z
    AddType application/x-gzip .gz .tgz

    AddType text/html .shtml
    AddOutputFilter INCLUDES .shtml
</IfModule>

AddDefaultCharset UTF-8

<IfModule mime_magic_module>
    MIMEMagicFile conf/magic
</IfModule>

<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} !https [NC]
    RewriteCond %{HTTP_USER_AGENT} !^ELB-HealthChecker
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
</IfModule>

EnableSendfile on

IncludeOptional conf.d/*.conf

When permissions on files mounted into Docker become a problem

Warning: touch(): Utime failed: Permission denied

The Apache user doesn't match the directory owner, so add the following to the Dockerfile:

RUN usermod -u 1000 www-data \
    && groupmod -g 1000 www-data

PHP module installation in Docker got stuck

Add the following to the Dockerfile:

RUN apt-get install -y libpng12-dev libjpeg-dev
RUN docker-php-ext-configure gd --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j $(nproc) gd
RUN apt-get install -y libmagickwand-dev
RUN pecl install imagick
RUN docker-php-ext-enable imagick

Vagrant

Default pass: vagrant

vagrant up fails with default: Warning: Authentication failure. Retrying…


$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...

The permissions on /home/vagrant are inappropriate, so change them (a requirement of ssh):

vagrant ssh
cd /home
chmod 755 -R ./vagrant

Reference

http://qiita.com/jshimazu/items/9db49ce64478e82d511e

iOS

pod setup fails

When pod setup doesn't work while introducing CocoaPods:

$ rbenv exec pod setup              
Setting up CocoaPods master repo
[!] ERROR: Parsing unable to continue due to parsing error:
contained in the file located at /Users/shogo/.cocoapods/repos/master/CocoaPods-version.yml
---
min: 0.32.1
last: 1.1.0.rc.2

Solution

A problem with the YAML.load function:

rbenv exec gem uninstall psych
rbenv exec gem install psych -v 2.0.0

Full sequence

rm -Rf  ~/.cocoapods
sudo gem install cocoapods
rbenv exec gem uninstall cocoapods
rbenv exec gem update
rbenv exec gem uninstall ruby
rbenv exec gem install cocoapods
rbenv exec gem install ruby
rbenv exec gem uninstall psych
rbenv exec gem install psych -v 2.0.0
$ rbenv exec pod setup               
Setting up CocoaPods master repo

CocoaPods 1.1.0.rc.2 is available.
To update use: `gem install cocoapods --pre`
[!] This is a test version we'd love you to try.

For more information, see https://blog.cocoapods.org and the CHANGELOG for this version at https://github.com/CocoaPods/CocoaPods/releases/tag/1.1.0.rc.2

Setup completed

Tips

Extracting JSON data

$ jq -r '.prefixes[] | select(.region == "ap-northeast-1") | select(.service == "EC2") | .ip_prefix' < ip-ranges.json
13.112.0.0/14
46.51.224.0/19
52.68.0.0/15

where ip-ranges.json looks like:

{
  "syncToken": "1494364331",
  "createDate": "2017-05-09-21-12-11",
  "prefixes": [
    {
      "ip_prefix": "13.32.0.0/15",
      "region": "GLOBAL",
      "service": "AMAZON"
    },
    {
      "ip_prefix": "13.54.0.0/15",
      "region": "ap-southeast-2",
      "service": "AMAZON"
    }
  ]
}

IP address analysis

$ ipcalc 52.95.255.48/24

Address:   52.95.255.48          00110100.01011111.11111111. 00110000
Netmask:   255.255.255.0 = 24    11111111.11111111.11111111. 00000000
Wildcard:  0.0.0.255             00000000.00000000.00000000. 11111111
=>
Network:   52.95.255.0/24        00110100.01011111.11111111. 00000000
HostMin:   52.95.255.1           00110100.01011111.11111111. 00000001
HostMax:   52.95.255.254         00110100.01011111.11111111. 11111110
Broadcast: 52.95.255.255         00110100.01011111.11111111. 11111111
Hosts/Net: 254                   Class A

Auto-generating a Brewfile

brew tap Homebrew/bundle
brew bundle dump

Getting the public IP address

curl inet-ip.info

Android: how far to go with @NonNull and exceptions

It depends on how much you can trust the users of the class. http://techlife.cookpad.com/entry/2015/02/20/195000

Can't log in as another user from the superuser

$ su - www-data
This account is currently not available.
$ usermod -s /bin/bash www-data
$ su - www-data

Not a directory error with scp -i <private key>

scp -i ~/.ssh/id_rsa_micoly_production ec2-user@52.192.160.17:~/schema_dump_20160826.sql .


Frequently used commands for infrastructure engineers

http://qiita.com/sion_cojp/items/04a2aa76a1021fe77079?utm_content=buffer5c76e&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

Going back through history with less

history | less

Delete a given line number from the history

history -d 4980

Changing private key information (e.g. the passphrase)

$ ssh-keygen -p

Setting up a PHP lint tool in vim (PSR-2)

Installing NeoBundle

$ mkdir -p ~/.vim/bundle
$ git clone https://github.com/Shougo/neobundle.vim ~/.vim/bundle/neobundle.vim

Edit .vimrc as follows:

set nocompatible
filetype off
if has('vim_starting')
    set runtimepath+=~/.vim/bundle/neobundle.vim
    call neobundle#begin(expand('~/.vim/bundle/'))
    NeoBundleFetch 'Shougo/neobundle.vim'
        " */* : repos on github (git://github.com/*/*.git)
        NeoBundle 'scrooloose/syntastic'
    call neobundle#end()
endif
filetype plugin indent on
filetype indent on
syntax on

Enabling plugins

Run :NeoBundleInstall in vim's command-line mode to enable plugins. To disable one, delete it from .vimrc and then run :NeoBundleClean.

Checking PHP settings

php -r 'phpinfo();' | grep upload_max_filesize

Reference

http://blog.xcir.net/?p=1811

Connecting over SSH after the host has changed

ssh-keygen -R <hostname>

(Reference) https://www.uramiraikan.net/Works/entry-1970.html

Installing Node.js on a Mac

http://arfyasu.hatenablog.com/entry/2016/01/26/212543

nvm ls-remote
        v0.1.14
        v0.1.15
            …
         v6.2.1
         v6.2.2

Updating Node.js to the latest version (v6.2.2) on Ubuntu

With the latest available from apt:

node -v
v0.10.46
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
node -v
v6.2.2

SSH to GitHub

$ cd ~/.ssh
$ ssh-keygen -t rsa -C "your_email@example.com"
Generating public/private rsa key pair.
# Enter file in which to save the key (/Users/you/.ssh/id_rsa): github_id_rsa (any name)
Enter passphrase (empty for no passphrase): (optional)
# Enter same passphrase again: (optional)
$ ssh-add ~/.ssh/github_id_rsa
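Since the key was saved under a non-default name, an ~/.ssh/config entry avoids having to pass it explicitly; a sketch matching the github_id_rsa name used above:

```conf
# Example ~/.ssh/config entry for the github_id_rsa key
Host github.com
  HostName github.com
  User git
  IdentityFile ~/.ssh/github_id_rsa
  IdentitiesOnly yes
```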

ssh-add gives an error

Could not open a connection to your authentication agent.

Run the following command:

eval `ssh-agent`

Alternative workaround

git remote -v
git remote set-url origin git://github.com/YourDirectory/YourProject.git

PATH setup

$ vi ~/.bashrc

export PATH=/usr/local/bin:$PATH
alias sudo="sudo env PATH=$PATH"

Managing a .gitignore outside the repository for files under git

Put a .gitignore directly under your home directory with the following:

.DS_Store
*~
*.swp

In .gitconfig, point excludesfile at that .gitignore:

[core]
    excludesfile = ~/.gitignore
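The same setting can be made with git config --global; a sandboxed check that git really ignores the listed patterns (temporary HOME so the real config is untouched):

```shell
# Sandboxed demo: a global excludesfile makes git ignore .DS_Store everywhere.
set -e
export HOME="$(mktemp -d)"
printf '%s\n' '.DS_Store' '*~' '*.swp' > "$HOME/.gitignore"
git config --global core.excludesfile '~/.gitignore'
cd "$(mktemp -d)" && git init -q
touch .DS_Store tracked.txt
git status --porcelain   # tracked.txt is untracked; .DS_Store is not listed
```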

iOS

iOS: Unknown error calling sqlite3_step (11: database disk image is malformed)

Delete the app and reinstall it.

iOS: implementing a TableView with custom cells

  1. Add UITableViewDelegate and UITableViewDataSource.

  2. Wire up dataSource and delegate by dragging in the StoryBoard, and add the following to viewDidLoad():

homeTableView.dataSource = self
homeTableView.delegate = self

self.view.addSubview(homeTableView)

  3. Add the various tableView methods.

http://yuu.1000quu.com/use_a_custom_cell_in_swift

iOS: implementing a tab bar

Create a TabBarController and write the following:

homeView = Utility.makeViewController(storyboardName: "Home", storyboardId: "main") as! HomeViewController
applicationView = ApplicationViewController()
nailView = NailViewController()
mypageView = MypageViewController()

homeView.tabBarItem = UITabBarItem(tabBarSystemItem: UITabBarSystemItem.featured, tag: 0)
applicationView.tabBarItem = UITabBarItem(tabBarSystemItem: UITabBarSystemItem.featured, tag: 1)
nailView.tabBarItem = UITabBarItem(tabBarSystemItem: UITabBarSystemItem.featured, tag: 2)
mypageView.tabBarItem = UITabBarItem(tabBarSystemItem: UITabBarSystemItem.featured, tag: 3)
        
let myTabs = Array(arrayLiteral: homeView!, applicationView!, nailView!, mypageView!)
self.setViewControllers(myTabs, animated: false)

Write the following in each ViewController:

self.tabBarItem = UITabBarItem(tabBarSystemItem: UITabBarSystemItem.featured, tag: 0)

iOS: what to do about Code Sign error: Command /usr/bin/codesign failed with exit code 1

The trace also showed the error bundle format unrecognized, invalid, or unsuitable; the cause was having created a directory named Resources in the project.

http://stackoverflow.com/questions/29271548/code-sign-error-bundle-format-unrecognized-invalid-or-unsuitable

iOS: Objective-C files can't be read from the Bridging Header

In Build Settings, set Packaging > Defines Module to Yes.

http://takerpg.hatenablog.jp/entry/2014/12/02/223702

XCTest errors like Use of undeclared type 'Todo' or does not have a member named 'todo'

You may have forgotten to add:

@testable import <project name>

http://yanamura.hatenablog.com/entry/2014/12/27/150454

Apache Spark

Exception in thread “main” java.lang.IllegalArgumentException: Field “x” does not exist.

The source CSV must start with:

x,y
0.0,0.00112248

The x,y header row is required.

AWS

Suddenly getting 403 from CodeCommit

Open the Keychain Access app and search for codecommit. Under access control, delete the URL with the minus button.

http://docs.aws.amazon.com/ja_jp/codecommit/latest/userguide/troubleshooting.html#troubleshooting-macoshttps

Installing DynamoDB Local

$ wget http://dynamodb-local.s3-website-us-west-2.amazonaws.com/dynamodb_local_latest.tar.gz
$ tar xzvf dynamodb_local_latest.tar.gz
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
$ aws dynamodb list-tables --endpoint-url http://localhost:8000

Verification

$ aws dynamodb create-table --endpoint-url http://localhost:8000 --profile private --table-name sample_table \
 --attribute-definitions AttributeName=key,AttributeType=S \
 --key-schema AttributeName=key,KeyType=HASH \
 --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:ddblocal:000000000000:table/sample_table",
        "AttributeDefinitions": [
            {
                "AttributeName": "key",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 1,
            "LastIncreaseDateTime": 0.0,
            "ReadCapacityUnits": 1,
            "LastDecreaseDateTime": 0.0
        },
        "TableSizeBytes": 0,
        "TableName": "sample_table",
        "TableStatus": "ACTIVE",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "key"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1488946192.292
    }
}
$ aws dynamodb describe-table --table-name sample_table --endpoint-url http://localhost:8000 --profile private
{
    "Table": {
        "TableArn": "arn:aws:dynamodb:ddblocal:000000000000:table/sample_table",
        "AttributeDefinitions": [
            {
                "AttributeName": "key",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 1,
            "LastIncreaseDateTime": 0.0,
            "ReadCapacityUnits": 1,
            "LastDecreaseDateTime": 0.0
        },
        "TableSizeBytes": 0,
        "TableName": "sample_table",
        "TableStatus": "ACTIVE",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "key"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1488946192.292
    }
}

Uploading images directly from JS to S3

Configure the S3 bucket's CORS as follows:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>http://sample.jp</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>http://localhost</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Posting the daily AWS bill to Slack

1. Enable billing notifications by choosing "Receive Billing Alerts" under Account > Preferences.
2. Get a Slack webhook URL.
3. Zip the following code together with its libraries, upload it to Lambda, and run it daily via a cron schedule.

var aws = require('aws-sdk');
var url = require('url');
var https = require('https');
var hookUrl, slackChannel;
var cloudwatch = new aws.CloudWatch({region: 'us-east-1', endpoint: 'http://monitoring.us-east-1.amazonaws.com'});

hookUrl = 'https://hooks.slack.com/services/XXXXXXXXXXX/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';
slackChannel = 'charge_infra';

var postMessage = function(message, callback) {
    var body = JSON.stringify(message);
    var options = url.parse(hookUrl);
    options.method = 'POST';
    options.headers = {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(body),
    };

    var postReq = https.request(options, function(res) {
        var chunks = [];
        res.setEncoding('utf8');
        res.on('data', function(chunk) {
            return chunks.push(chunk);
        });
        res.on('end', function() {
            var body = chunks.join('');
            if (callback) {
                callback({
                    body: body,
                    statusCode: res.statusCode,
                    statusMessage: res.statusMessage
                });
            }
        });
        return res;
    });

    postReq.write(body);
    postReq.end();
};

exports.handler = function(event, context) {
    console.log('Start checking cloudwatch billing info.');

    var startDate = new Date();
    startDate.setDate(startDate.getDate() - 1);

    var params = {
        MetricName: 'EstimatedCharges',
        Namespace: 'AWS/Billing',
        Period: 86400 * 31,
        StartTime: new Date(new Date().setDate(1)),
        EndTime: new Date(),
        Statistics: ['Maximum'],
        Dimensions: [
            {
                Name: 'Currency',
                Value: 'USD'
            }
        ]
    };
    cloudwatch.getMetricStatistics(params, function(err, data) {
        if (err) {
            console.error(err, err.stack);
            context.fail("Failed to get billing metrics.");
        } else {
            console.dir(data);
            var datapoints = data['Datapoints'];
            if (datapoints.length < 1) {
                console.error("There is no billing info.");
                context.fail("There is no billing info.");
            }

            var latestData = datapoints[datapoints.length - 1];
            console.log(latestData);

            var dateString = [startDate.getFullYear(), startDate.getMonth() + 1, startDate.getDate()].join("/");
            var statusColor = "#439FE0";
            var price = Number(latestData['Maximum']);
            if (price > 70) {
                statusColor = "danger";
            } else if (price > 50) {
                statusColor = "warning";
            } else if (price > 30) {
                statusColor = "good";
            }

            var slackMessage = {
                channel: slackChannel,
                attachments: [
                    {
                        color:statusColor,
                        text:"AWS 利用料金(現在): $" + String(price)
                    }
                ]
            };

            postMessage(slackMessage, function(response) {
                if (response.statusCode < 400) {
                    console.info('Message posted successfully');
                    context.succeed();
                } else if (response.statusCode < 500) {
                    console.error("Error posting message to Slack API: " + response.statusCode + " - " + response.statusMessage);
                    context.succeed();
                } else {
                    context.fail("Server error when processing message: " + response.statusCode + " - " + response.statusMessage);
                }
            });
        }
    });
};
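The alert-color thresholds in the handler above boil down to a small price-to-color mapping. A sketch in Python (the dollar thresholds mirror the JavaScript code; the colors are Slack's standard attachment colors):

```python
def status_color(price):
    """Map an estimated monthly charge (USD) to a Slack attachment color."""
    if price > 70:
        return "danger"   # red
    if price > 50:
        return "warning"  # yellow
    if price > 30:
        return "good"     # green
    return "#439FE0"      # default blue

print(status_color(80))  # danger
print(status_color(20))  # #439FE0
```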


Prevent committing AWS secret keys

$ git secrets --install /path/to/project
$ git secrets --register-aws

Health checks under Basic auth return 401

Two approaches:

1. Configure a dedicated health-check URL that bypasses auth
2. Change the ELB ping protocol to TCP

  location @app {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_redirect off;

    proxy_pass http://aokiji_svr;
  }

  location = /healthcheck.txt {
    access_log off;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://aokiji_svr/healthcheck;
  }
Apache equivalent (Satisfy Any lets the health check through without credentials):

<Files healthcheck.txt>
    Satisfy Any
    Allow from all
</Files>
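The intended behavior — 401 everywhere except the health-check URL — can be sanity-checked locally. The sketch below simulates it with Python's built-in http.server (the handler and port are stand-ins, not the real nginx/Apache setup):

```python
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Mimics the config above: auth required everywhere except the health-check URL."""
    def do_GET(self):
        if self.path == "/healthcheck.txt":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="Restricted"')
            self.end_headers()

    def log_message(self, *args):
        pass  # silence request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def status(path):
    try:
        return urllib.request.urlopen("http://127.0.0.1:%d%s" % (port, path)).getcode()
    except urllib.error.HTTPError as e:
        return e.code

health = status("/healthcheck.txt")
other = status("/")
server.shutdown()
print(health, other)  # 200 401
```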


Notes on uploading a zip to Lambda

Be careful not to forget installing libraries (include them inside the directory you upload)

$ pip install requests -t .

(Reference) http://qiita.com/Hironsan/items/0eb5578f3321c72637b4

Zip the directory and upload it

$ zip -r upload.zip *
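The packaging step can also be scripted with Python's standard library. A sketch (the directory and archive names are placeholders):

```python
import os
import zipfile

def package(src_dir, zip_path):
    """Zip the contents of src_dir (installed libraries included) for Lambda upload."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(src_dir):
            for name in files:
                path = os.path.join(root, name)
                # Store paths relative to src_dir so lambda_function.py
                # ends up at the archive root, as Lambda expects.
                zf.write(path, os.path.relpath(path, src_dir))

# package("my_function", "upload.zip")  # then upload upload.zip to Lambda
```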

Verification

Run the following commands:

$ python
Python 3.5.1 (v3.5.1:37a07cee5969, Dec  6 2015, 01:38:48) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from lambda_function import lambda_handler
>>> lambda_handler(None,None)

Managing dependency libraries

$ cat requirements.txt
requests
$ pip install -r requirements.txt ( -t . )

Run a job on only one EC2 instance behind an ELB

#!/bin/bash

LOAD_BALANCER_NAME="[load balancer name]"

# Get this instance's ID from the EC2 metadata endpoint.
MY_INSTANCE=`curl -s 169.254.169.254/latest/meta-data/instance-id/`

# Get the list of InService instances behind the load balancer.
INSTANCE_LIST=`aws elb describe-instance-health \
  --load-balancer-name ${LOAD_BALANCER_NAME} \
  --region ap-northeast-1 \
  --query 'InstanceStates[?State==\`InService\`].InstanceId' | jq -r '.[]'`

IS_ACTIVE=0

for INSTANCE in ${INSTANCE_LIST}
do
    if [ "${INSTANCE}" = "${MY_INSTANCE}" ]; then
        IS_ACTIVE=1
        continue
    fi

    # Another InService instance sorts before this one, so defer to it.
    if [ "${INSTANCE}" \< "${MY_INSTANCE}" ]; then
        exit 1
    fi
done

if [ ${IS_ACTIVE} -eq 1 ]; then
    exit 0
else
    exit 1
fi
Calling script (assuming the script above is saved as check.sh):

#!/bin/bash

./check.sh
IS_RUN=$?

if [ ${IS_RUN} -eq 0 ]; then
    [run the batch job]
fi
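The election logic in check.sh reduces to: run only on the InService instance with the lexicographically smallest ID. A minimal Python sketch of the same rule (the instance IDs are made up):

```python
def should_run(my_instance, in_service):
    """True only on the InService instance with the smallest ID."""
    return my_instance in in_service and my_instance == min(in_service)

in_service = ["i-0aaa1111", "i-0bbb2222", "i-0ccc3333"]  # hypothetical InService IDs
print(should_run("i-0aaa1111", in_service))  # True: smallest ID wins
print(should_run("i-0bbb2222", in_service))  # False: a smaller ID exists
print(should_run("i-0ddd4444", in_service))  # False: not InService
```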

Reference

http://qiita.com/idaaki/items/a356d1785a59d4150358

Check S3 bucket size (two ways)

$ aws cloudwatch get-metric-statistics --region us-east-1 --namespace AWS/S3 --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=emotion-s3 Name=StorageType,Value=StandardStorage --statistics Average --start-time 2017-05-14T00:00:00Z --end-time 2017-05-14T23:59:59Z --period 86400

{
    "Datapoints": [
        {
            "Timestamp": "2017-05-14T00:00:00Z",
            "Average": 1670515.0,
            "Unit": "Bytes"
        }
    ],
    "Label": "BucketSizeBytes"
}

$ aws s3 ls --summarize --human-readable --recursive s3://emotion-s3

2017-04-27 23:17:36    5.0 KiB EmotionRegressionTriggerArtifacts/target/com.emotion.lambda-0.0.1-SNAPSHOT.jar
2017-04-16 23:47:55   67.0 KiB analysis3.csv
2017-05-02 23:26:12   93.1 KiB cloudformer.template
︙
2017-04-18 12:34:17    0 Bytes emotion-emr/src/
2017-04-22 21:26:16    3.5 KiB emotion-emr/src/emotion-linear-regression_2.11-1.0.jar
2017-04-18 14:56:51    3.5 KiB emotion-emr/src/spark-sample-project_2.11-1.0.jar
2017-04-22 21:12:50   44.0 KiB emotion-emr/transition.csv
2017-04-16 23:47:56  398.4 KiB snapshot.jpg
2017-04-16 23:48:00  352.5 KiB snapshot1.jpg
2017-04-18 12:30:02    1.9 KiB transition.csv

Total Objects: 184
Total Size: 1.6 MiB
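The CloudWatch result is in raw bytes while `aws s3 ls --human-readable` prints KiB/MiB. A quick Python sketch of that conversion (the formatting imitates the CLI output; the function name is my own):

```python
def human_readable(num_bytes):
    """Convert a byte count to the KiB/MiB style used by `aws s3 ls`."""
    for unit in ("Bytes", "KiB", "MiB", "GiB", "TiB"):
        if num_bytes < 1024 or unit == "TiB":
            if unit == "Bytes":
                return "%d %s" % (num_bytes, unit)
            return "%.1f %s" % (num_bytes, unit)
        num_bytes /= 1024.0

print(human_readable(1670515.0))  # 1.6 MiB -- matches the totals above
```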