Issue: I can establish the VPN connection, but pings to a public IP like 8.8.8.8 time out.
Fix: run the two lines below on the OpenVPN server (adjust the VPN subnet and outbound interface to match your setup):
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.66.77.0/24 -o eth0 -j MASQUERADE
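To make the fix survive a reboot, here's a minimal sketch, assuming a RHEL/CentOS-style box where the iptables init service is available (the sysctl key is standard, but the save command may differ on other distros):
# enable IP forwarding permanently
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
# persist the current iptables rules, including the MASQUERADE rule added above
service iptables save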
Saturday, December 7, 2013
Monday, November 25, 2013
Opsview error codes and fixes
1. (Return code of 13 is out of bounds)
Fixes: Clear out /tmp; some of the files there are owned by root and the nagios user can't access them (see the sketch after this list).
2. (Return code of 255 is out of bounds)
Most likely a wrong password was supplied, or the password contains "!", on the MySQL server node.
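For the /tmp cleanup in (1), a minimal sketch, assuming the monitoring user is named nagios; the check_* name pattern is only an illustrative guess, so review the listing before deleting anything:
# list files in /tmp that the nagios user does not own
find /tmp -maxdepth 1 ! -user nagios -ls
# remove stale plugin temp files once you have confirmed they are safe to delete
find /tmp -maxdepth 1 ! -user nagios -name 'check_*' -delete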
Friday, November 22, 2013
Install s3fs on Amazon Linux/CentOS
s3fs is a FUSE-based file system backed by Amazon S3 that lets you mount an S3 bucket on a Linux machine.
Use cases:-
- Turn your backup folder into an unlimited storage pool.
- Serve as a centralized media storage location for multiple servers.
Limitations:-
- Object size can only be up to 5 GB.
- You can't update part of an object. If you want to update 1 byte in a 1 GB object, you'll have to re-upload the entire file.
sudo yum groupinstall "Development Tools";
sudo yum install curl-devel libxml2-devel openssl-devel mailcap
cd ~;
wget http://downloads.sourceforge.net/project/fuse/fuse-2.X/2.9.3/fuse-2.9.3.tar.gz;
tar -xzvf fuse-2.9.3.tar.gz;
cd fuse-2.9.3;
./configure --prefix=/usr;
make;
sudo make install;
sudo ldconfig;
export PKG_CONFIG_PATH=/usr/lib/pkgconfig;
#Verify version
#pkg-config --modversion fuse
cd ~;
wget http://s3fs.googlecode.com/files/s3fs-1.73.tar.gz;
tar -xzvf s3fs-1.73.tar.gz;
cd s3fs-1.73;
./configure --prefix=/usr;
make;
sudo make install;
vi /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
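The credentials file holds your AWS key pair; a minimal sketch of its contents, with obvious placeholders for the actual keys (you can also prefix a line with bucketname: to scope the keys to a single bucket):
# /etc/passwd-s3fs -- format: accessKeyId:secretAccessKey
AKIAXXXXXXXXXXXXXXXX:your-secret-access-key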
Mount: s3fs [bucketname] ~/s3bucket
Unmount: fusermount -u ~/s3bucket
Mount on boot - add the line below to /etc/fstab:
s3fs#s3bucket /mnt/s3bucket fuse allow_other,use_cache=/tmp 0 0
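To check the fstab entry without rebooting, a quick sketch assuming the /mnt/s3bucket mount point from the line above:
mkdir -p /mnt/s3bucket
mount -a
df -h /mnt/s3bucket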
Sources: http://kisdigital.wordpress.com/2011/08/04/installing-s3fs-on-rhelcentos/
http://www.idevelopment.info/data/AWS/AWS_Tips/AWS_Management/AWS_20.shtml
http://www.turnkeylinux.org/blog/exploring-s3-based-filesystems-s3fs-and-s3backer
S3fs Home
Fuse Home
Saturday, August 31, 2013
Magento - Shipping Configuration error
I was setting up the UPS XML integration over the last two weeks and kept banging my head against the wall with this error message:-
This shipping method is currently unavailable. If you would like to ship using this shipping method, please contact us.
As I'm still new to Magento, it took me two weeks to get to the root cause. In the beginning I suspected the problem was related to the UPS account or access key, so I kept testing around them, but it turned out to be a unit-of-measure issue.
On some of the test products I had set the weight to 200, which turned out to mean 200 kg, over the 70 kg limit. I only found this after I stumbled across this thread, which advised turning on debug mode:-
http://www.magentocommerce.com/boards/viewthread/4283/
And the log revealed the cause:-
<RatingServiceSelectionResponse>
  <Response>
    <TransactionReference>
      <CustomerContext>Rating and Service</CustomerContext>
      <XpciVersion>1.0</XpciVersion>
    </TransactionReference>
    <ResponseStatusCode>0</ResponseStatusCode>
    <ResponseStatusDescription>Failure</ResponseStatusDescription>
    <Error>
      <ErrorSeverity>Hard</ErrorSeverity>
      <ErrorCode>111036</ErrorCode>
      <ErrorDescription>The maximum per package weight for the selected service from the selected country is 70.00 kg.</ErrorDescription>
    </Error>
  </Response>
</RatingServiceSelectionResponse>
I wonder why the detailed or specific error can't be shown on the front-end site!
Thursday, May 16, 2013
VirtualBox Installation error
If you run into the error message below at the first launch of VirtualBox on your Linux box:-
================
Kernel driver not installed (rc=-1908)
The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall the kernel module by executing
'/etc/init.d/vboxdrv setup'
as root. If it is available in your distribution, you should install the DKMS package first. This package keeps track of Linux kernel changes and recompiles the vboxdrv kernel module if necessary.
================
[root@localhost ~]# /etc/init.d/vboxdrv setup
Stopping VirtualBox kernel modules [ OK ]
Recompiling VirtualBox kernel modules [FAILED]
(Look at /var/log/vbox-install.log to find out what went wrong)
which leads you to another error in /var/log/vbox-install.log:-
unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again
Solution:-
[root@localhost ~]# yum install kernel-devel kernel-headers gcc
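If the rebuild still fails because the installed kernel-devel doesn't match the running kernel, here's a minimal sketch for a RHEL/CentOS-style box (it assumes the repos still carry packages for your exact kernel version):
# install headers/devel for the exact running kernel, then rebuild the module
yum install -y "kernel-devel-$(uname -r)" "kernel-headers-$(uname -r)" gcc
/etc/init.d/vboxdrv setup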
Tuesday, April 9, 2013
AWS - S3 - Apply bucket policy for public read and office IP read and write
Here's a sample S3 bucket policy for when you have a public-read bucket but want to restrict write/update access to the office network.
Note that if you grant Everyone the List permission from the Permissions menu, anyone can grab the full list of your bucket's objects just by browsing the bucket's root URL.
{
  "Id": "Policy1346919974114",
  "Statement": [
    {
      "Sid": "Stmt1346917860156",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::origin-pdf.domain.com/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "110.174.240.29/26",
            "175.143.152.282/32"
          ]
        }
      },
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    },
    {
      "Sid": "Stmt1346919900506",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::origin-pdf.domain.com/*",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
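If you prefer applying the policy from the command line rather than the console, a minimal sketch with the AWS CLI, assuming the JSON above is saved as policy.json and the bucket name matches the Resource ARN:
aws s3api put-bucket-policy --bucket origin-pdf.domain.com --policy file://policy.json
# verify what is now attached
aws s3api get-bucket-policy --bucket origin-pdf.domain.com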
Saturday, April 6, 2013
DevOps Toolbox - Mount New volume to EC2
This should be a piece of cake for most sysadmins, but if you only do it once in a blue moon (or are starting to age like me), you'd probably spend 30 minutes googling it. So I decided to write it down.
==
1. Attach the new EBS volume to your instance from console
2. Log in to your instance on the command line and run
(# represents the command prompt):
# ls /dev
You should see that /dev/sdf has been created for you
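Note that on more recent kernels the volume may show up as /dev/xvdf instead of /dev/sdf; a quick way to see what is attached:
# list all block devices, their sizes and mount points
lsblk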
3. Format /dev/sdf by running:
# mkfs.ext4 /dev/sdf   (or use mkfs.ext3 if you prefer ext3)
It will warn you that this is an entire device. Type y to allow the process to continue, unless you want to create specific partitions on this device.
4. Create a directory to mount your new drive on the filesystem; for example, we'll use /var:
# mkdir /var   (first move the existing directory aside: mv /var /var.bk)
5. Add a reference in the fstab file to mount the newly
formatted drive onto the /var directory by running the following command:
# echo "/dev/sdf /var ext4 noatime 0 0" >> /etc/fstab
6. Mount the drive by running:
# mount /var
7. Check your drive has mounted correctly with the expected
amount of file space by running:
# df -h /var
It really is that simple; within a few CLI commands you can add 1 GB to 1 TB of storage at the drop of a hat!
Source: http://www.digitaltactics.co.uk/linux/how-to-mount-an-amazon-ebs-disk-as-a-drive-in-linux-centos/