Nginx
Zaur Ashurbekov, 2016-12-03 21:04:37

What could be the problem with Unicorn in production?

Hello Habr!
I deployed a Rails application following the guide described on Habr, but the application returns "We're sorry, but something went wrong."
Here is what I found in nginx.error.log:

[crit] 14627#0: *32 connect() to unix:/var/www/application/current/tmp/sockets/.unicorn.sock failed (2: No such file or directory) while connecting to upstream

Here is what ps aux | grep unicorn shows:
4925  0.0  0.1   4688  2028 pts/0    S+   17:55   0:00 grep --color=auto unicorn
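For reference, that output only matches the grep process itself, so there is no Unicorn master running and the socket nginx is looking for is never created. One way to confirm this and to see why Unicorn fails to start is to check for the socket and run Unicorn in the foreground with the same config (the config/unicorn.rb path below is an assumption; point it at wherever your deploy keeps the Unicorn config):

# does the socket the nginx upstream expects exist at all?
ls -l /var/www/application/current/tmp/sockets/
# run Unicorn in the foreground so any startup error is printed to the terminal
# (config path is assumed; adjust -c to your actual Unicorn config file)
cd /var/www/application/current
bundle exec unicorn -c config/unicorn.rb -E production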

Unicorn config:

worker_processes 2
working_directory "/var/www/application/current" # available in 0.94.0+
# listen on both a Unix domain socket and a TCP port,
# we use a shorter backlog for quicker failover when busy
listen "/var/www /application/current/tmp/sockets/.unicorn.sock", :backlog => 64
listen 8080, :tcp_nopush => true
# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30
# feel free to point this anywhere accessible on the filesystem
pid "/var/www/application/current/tmp/pids/unicorn.pid"
# By default, the Unicorn logger will write to stderr.
# Additionally, some applications/frameworks log to stderr or stdout,
# so prevent them from going to /dev/null when daemonized here:
stderr_path "/var/www/application/current/log/unicorn.stderr.log"
stdout_path "/var/www/application/current/log/unicorn.stdout.log"
# combine Ruby 2.0.0dev or REE with "preload_app true" for memory savings
# rubyenterpriseedition.com/faq.html#adapt_apps_for_cow
preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
GC.copy_on_write_friendly = true
# Enable this flag to have unicorn test client connections by writing the
# beginning of the HTTP headers before calling the application. This
# prevents calling the application for connections that have disconnected
# while queued. This is only guaranteed to detect clients on the same
# host unicorn runs on, and unlikely to detect disconnects even on a
# fast LAN.
check_client_connection false
before_fork do |server, worker|
# the following is highly recommended for Rails + "preload_app true"
# as there's no need for the master process to hold a connection
defined?(ActiveRecord::Base) and
ActiveRecord::Base.connection.disconnect!
# The following is only recommended for memory/DB-constrained
# installations. It is not needed if your system can house
# twice as many worker_processes as you have configured.
#
# # This allows a new master process to incrementally
# # phase out the old master process with SIGTTOU to avoid a
# # thundering herd (especially in the "preload_app false" case)
# # when doing a transparent upgrade. The last worker spawned
# # will then kill off the old master process with a SIGQUIT.
old_pid = "#{server.config[:pid]}.oldbin"
if old_pid != server.pid
begin
sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
Process.kill(sig, File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
end
end
#
# Throttle the master from forking too quickly by sleeping. Due
# to the implementation of standard Unix signal handlers, this
# helps (but does not completely) prevent identical, repeated signals
# from being lost when the receiving process is busy.
# sleep 1
end
after_fork do |server, worker|
# per-process listener ports for debugging/admin/migrations
# addr = "127.0.0.1:#{9293 + worker.nr}"
# server.listen(addr, :tries => -1, :delay => 5, :tcp_nopush => true)
# the following is *required* for Rails + "preload_app true",
defined?(ActiveRecord::Base) and
ActiveRecord::Base.establish_connection
# if preload_app is true, then you may also want to check and
# restart any other shared sockets/descriptors such as Memcached,
# and Redis. TokyoCabinet file handles are safe to reuse
# between any number of forked children (assuming your kernel
# correctly implements pread()/pwrite() system calls)
end
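Two things are worth double-checking in this config: the listen socket path has to match the path in the nginx upstream block byte for byte, and the tmp/sockets, tmp/pids and log directories have to exist before the master starts, otherwise Unicorn typically dies on boot. A minimal sketch, assuming the /var/www/application/current layout used above:

mkdir -p /var/www/application/current/tmp/sockets \
         /var/www/application/current/tmp/pids \
         /var/www/application/current/log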

Nginx config:

user nginx web;
pid /var/www/run/nginx.pid;
error_log /var/www/log/nginx.error.log;
events {
worker_connections 1024; # increase if you have lots of clients
accept_mutex off; # "on" if nginx worker_processes > 1
use epoll; # enable for Linux 2.6+
# use kqueue; # enable for FreeBSD, OSX
}
http {
# nginx will find this file in the config directory set at nginx build time
include mime.types;
types_hash_max_size 2048;
server_names_hash_bucket_size 64;
# fallback in case we can't determine a type
default_type application/octet-stream;
# click tracking!
access_log /var/www/log/nginx.access.log combined;
# you generally want to serve static files with nginx since neither
# Unicorn nor Rainbows! is optimized for it at the moment
sendfile on;
tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
tcp_nodelay off; # on may be better for some Comet/long-poll stuff
# we haven't checked to see if Rack::Deflate on the app server is
# faster or not than doing compression via nginx. It's easier
# to configure it all in one place here for static files and also
# to disable gzip for clients who don't get gzip/deflate right.
# There are other gzip settings that may be needed to deal with
# bad clients out there, see wiki.nginx.org/NginxHttpGzipModule
gzip on;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 0;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
gzip_proxied expired no-cache no-store private auth;
gzip_comp_level 9;
gzip_types text/plain text/xml text/css
text/comma-separated-values
text/javascript application/x-javascript
application/atom+xml;
# this can be any application server, not just Unicorn/Rainbows!
upstream app_server {
server unix:/var/www/application/current/tmp/sockets/.unicorn.sock fail_timeout=0;
}
server {
# PageSpeed
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
location ~ "\.pagespeed\.([az]\.)?[az]{2}\.[^.]{10}\.[^.]+" {
add_header "" "";
}
location ~ "^/ngx_pagespeed_static/" { }
location ~ "^/ngx_pagespeed_beacon$" { }
location /ngx_pagespeed_statistics {
allow 127.0.0.1; allow 5.228.169.73; deny all;
}
location /ngx_pagespeed_global_statistics {
allow 127.0.0.1; allow 5.228.169.73; deny all;
}
pagespeed MessageBufferSize 100000;
location /ngx_pagespeed_message {
allow 127.0.0.1; allow 5.228.169.73; deny all;
}
location /pagespeed_console {
allow 127.0.0.1; allow 5.228.169.73; deny all;
}
charset utf-8;
# enable one of the following if you're on Linux or FreeBSD
listen 80 default deferred; # for Linux
# listen 80 default accept_filter=httpready; # for FreeBSD
# If you have IPv6, you'll likely want to have two separate listeners.
# One on IPv4 only (the default), and another on IPv6 only instead
# of a single dual-stack listener. A dual-stack listener will make
# for ugly IPv4 addresses in $remote_addr (eg ":ffff:10.0.0.1"
# instead of just "10.0.0.1") and potentially trigger bugs in
# some software.
# listen [::]:80 ipv6only=on; # deferred or accept_filter recommended
client_max_body_size 4G;
server_name _;
# ~2 seconds is often enough for most folks to parse HTML/CSS and
# retrieve needed images/icons/frames, connections are cheap in
# nginx so increasing this is generally safe...
keepalive_timeout 5;
# path for static files
root /var/www/application/current/public;
# Prefer to serve static files directly from nginx to avoid unnecessary
# data copies from the application server.
#
# try_files directive appeared in nginx 0.7.27 and has stabilized
# over time. Older versions of nginx (eg 0.6.x) required
# "if (!-f $request_filename)" which was less efficient:
# bogomips.org/unicorn.git/tree/examples/nginx.conf?...
try_files $uri/index.html $uri.html $uri @app;
location ~ ^/(assets)/ {
root /var/www/application/current/public;
expires max;
add_header Cache-Control public;
}
location @app {
# an HTTP header important enough to have its own Wikipedia entry:
# en.wikipedia.org/wiki/X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if you forward HTTPS traffic to unicorn,
# this helps Rack set the proper URL scheme for doing redirects:
# proxy_set_header X-Forwarded-Proto $scheme;
# pass the Host: header from the client right along so redirects
# can be set properly within the Rack application
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
# set "proxy_buffering off" *only* for Rainbows! when doing
# Comet/long-poll/streaming. It's also safe to set if you're
# only serving fast clients with Unicorn + nginx, but not slow
# clients. You normally want nginx to buffer responses to slow
# clients, even with Rails 3.1 streaming because otherwise a slow
# client can become a bottleneck of Unicorn.
#
# The Rack application may also set "X-Accel-Buffering (yes|no)"
# in the response headers to disable/enable buffering on a
# per-response basis.
# proxy_buffering off;
proxy_pass http://app_server;
}
# Rails error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /var/www/application/current/public;
}
}
}
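The nginx side can be checked independently of Unicorn; a short sketch (the commands may need sudo depending on how nginx was installed):

# test the configuration syntax, then reload workers without dropping connections
nginx -t
nginx -s reload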


1 answer
sim3x, 2016-12-03
@zaurius

"No such file or directory"
ps showed that Unicorn is not running, so nothing ever created the socket that nginx is trying to connect to.
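If the deploy is supposed to start Unicorn but ps shows nothing, the reason it died is usually in its own log. A small sketch based on the stderr_path from the config in the question:

tail -n 50 /var/www/application/current/log/unicorn.stderr.log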

