Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
693 views
in Technique by (71.8m points)

linux - How to set a global nofile limit to avoid "many open files" error?

I have a WebSocket service. Strangely, it fails with the error "too many open files", even though I have set the system configuration:

/etc/security/limits.conf
*               soft    nofile          65000
*               hard    nofile          65000

/etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000

ulimit -n
// output: 65000
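Note that `ulimit -n` only reports the soft limit of the current shell, and a daemon's limits can differ from your shell's. A quick sketch for checking both values, and for inspecting a running process directly via /proc (substitute the real PID for `self`):

```shell
# Soft and hard nofile limits of the current shell:
ulimit -Sn   # soft limit (what the process currently gets)
ulimit -Hn   # hard limit (the ceiling the soft limit may be raised to)

# Limits of a running process, read straight from /proc
# (replace "self" with the PID of the supervisor-managed process):
grep "Max open files" /proc/self/limits
```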

So I think my system configuration is correct.

My service is managed by supervisor; could supervisor be imposing its own limits?

Checking a process started by supervisor:

cat /proc/815/limits
Max open files            1024                 4096                 files 

Checking a process started manually:

cat /proc/900/limits
Max open files            65000                 65000                 files 

The cause is that the service is managed by supervisor. If I restart supervisor and its child process, "Max open files" is correct (65000), but it is wrong (1024) when supervisor is started automatically after a system reboot.

Maybe supervisor starts too early in the boot sequence, before the system configuration is applied?

edit:

system: Ubuntu 12.04 64-bit

It's not a supervisor problem: every process that starts automatically after a system reboot ignores the system configuration (Max open files = 1024), but after a manual restart the limit is correct.
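This behaviour is consistent with how pam_limits works: /etc/security/limits.conf (and limits.d) is applied by the pam_limits module during a PAM login (ssh, console, su), while daemons started by init/upstart at boot never pass through PAM, so they simply inherit init's own limits — historically a 1024 soft nofile limit on many systems. A hedged way to see the difference:

```shell
# What boot-started daemons inherit (init's limits; may need root to read):
grep "Max open files" /proc/1/limits 2>/dev/null \
  || echo "cannot read /proc/1/limits"

# What a login-descended shell gets after pam_limits has run:
grep "Max open files" /proc/self/limits
```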

update

Now the question is: how do I set a global nofile limit? I don't want to set a nofile limit in every upstart script that needs one.
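For reference, whatever per-process limit you configure is still capped by two kernel-wide ceilings that can be inspected (and raised via sysctl) independently of pam_limits. Under upstart on Ubuntu 12.04, the per-job mechanism is a `limit` stanza, shown below as a comment with a hypothetical job name:

```shell
# Kernel-wide ceilings that cap any per-process nofile value:
cat /proc/sys/fs/file-max   # max open file handles system-wide
cat /proc/sys/fs/nr_open    # max value any single process's nofile limit may take

# Upstart's per-job alternative (hypothetical job file):
#   /etc/init/myservice.conf:
#     limit nofile 65000 65000
```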



1 Reply

0 votes
by (71.8m points)

I fixed this issue by setting the limits for all users in this file:

$ cat /etc/security/limits.d/custom.conf
* hard nofile 550000
* soft nofile 550000

REBOOT THE SERVER after setting the limits.

VERY IMPORTANT: The /etc/security/limits.d/ folder contains user-specific limits; in my case, Hadoop 2 (Cloudera) related limits. These user-specific limits override the global limits, so if your limits are not being applied, be sure to check both the user-specific files in /etc/security/limits.d/ and the file /etc/security/limits.conf.

CAUTION: Setting user-specific limits is the way to go in all cases; setting the global (*) limit should be avoided. In my case it was an isolated environment, and I only needed to rule out file-limit issues in my experiment.
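To see why a limits.d entry can win, note that pam_limits reads /etc/security/limits.conf first and then every *.conf file under /etc/security/limits.d/, and a per-user entry takes precedence over a wildcard (*) entry. A quick sketch for auditing every nofile line in reading order:

```shell
# List all nofile entries in the order pam_limits reads them
# (limits.conf first, then limits.d/*.conf):
cat /etc/security/limits.conf /etc/security/limits.d/*.conf 2>/dev/null \
  | grep -v '^[[:space:]]*#' | grep nofile \
  || echo "no nofile entries found"
```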

Hope this saves someone some hair, as I spent too much time pulling mine out chunk by chunk!


