Search Results

Search found 42115 results on 1685 pages for 'access management'.


  • Write permission when mounting Windows shares from Ubuntu

    - by Ola Tuvesson
    I think I'm close to having my dev environment set up exactly the way I want, but one final snag remains. I'm running VirtualBox on a Windows 7 64-bit host, with my dev environment inside an Ubuntu 12.04 guest. I want to keep the files for my projects on the host filesystem, partly so I can access them when the Ubuntu guest is not running, but also so I can use Tortoise and other Windows-based tools (cough Photoshop), and it also eases my backup scheme somewhat. So I've got a folder "Rails" on my NTFS drive, which I've shared from the host with a user specifically created for the Ubuntu guest. The mount point has been set up and an entry added to fstab (cifs), using a credentials file and the options iocharset=utf8,file_mode=0777,dir_mode=0777. This mounts fine and my Ubuntu user has both read and write permissions to the contents, but when I try to start my Rails app I get permission errors on any files the app needs to write to (e.g. the log file). What gives?
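    A common cause is that files on a CIFS mount end up owned by root when no uid/gid is given, so the Rails process running as the regular user fails despite the 0777 modes. A minimal fstab sketch adding uid, gid and noperm for the mounting user; the host address, mount point and uid/gid of 1000 are assumptions to adjust:

      //192.168.56.1/Rails  /home/dev/Rails  cifs  credentials=/home/dev/.smbcredentials,iocharset=utf8,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noperm  0  0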

    Read the article

  • Solaris disk I/O spike... which monitoring tools do I need?

    - by bonez05
    Hi all, my Solaris 10 web server / database server's disk I/O keeps spiking at intermittent times. Using iostat -xtc 5, the reads/sec will jump from 3.0 to 1450.0 and %busy will jump to 98%. The Apache access log doesn't indicate anything out of the ordinary; in other words, requests aren't any higher than usual. top doesn't show anything useful either. CPU usage is fine, with MySQL using about 20% and nothing else to speak of. What monitoring tool should I be using to see which process is generating the excessive disk I/O? If there are any other suggestions, I'm all ears. Thanks
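    Since the question is about per-process I/O attribution on Solaris 10, a DTrace sketch is a reasonable starting point. The io provider ships with Solaris 10; the /opt/DTT paths assume the separately installed DTrace Toolkit:

      # sum bytes issued to disk per process, until Ctrl-C
      dtrace -n 'io:::start { @bytes[execname] = sum(args[0]->b_bcount); }'

      # top-like views, if the DTrace Toolkit is installed (install path is an assumption)
      /opt/DTT/iotop -C 5
      /opt/DTT/iosnoop -v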

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests that join two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working because they fetch the rewritten URL. Thanks for any help!
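    One hedged possibility, assuming the IP and Domain Restrictions feature is installed (and the ipSecurity section is unlocked at server level) and that the internal calls go to 127.0.0.1, is to lock the file down in web.config rather than rewrite it; the path and address here are illustrative:

      <configuration>
        <location path="apis/httpapi.asp">
          <system.webServer>
            <security>
              <ipSecurity allowUnlisted="false">
                <!-- only requests originating on the server itself are allowed -->
                <add ipAddress="127.0.0.1" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </location>
      </configuration>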

    Read the article

  • Remote kill, upload, execute file

    - by Masoud M.
    I'm developing a program and I need to upload my xyz.exe file to many host machines and execute it frequently. I need a client-server tool that performs the steps below on each host after an update signal from my PC: each host machine should kill any running process named xyz.exe, download my new xyz.exe, then execute the new xyz.exe. I know about tools like PsExec, but I need a tool with a better user interface and more power. Is there any tool to do this? UPDATE: The systems are on the same LAN, the OS is Windows (XP or 7), and no full remote access is needed. I'm a developer, my program should run on the remote hosts, and I'm testing my application.
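    For reference, a rough batch sketch of the kill/copy/run cycle built on taskkill and Sysinternals PsExec; HOST1, the target path and the credentials are placeholders, and it assumes admin rights plus access to the C$ share on each host:

      taskkill /s HOST1 /u DOMAIN\admin /p secret /im xyz.exe /f
      copy /y xyz.exe \\HOST1\c$\apps\xyz.exe
      psexec \\HOST1 -u DOMAIN\admin -p secret -d c:\apps\xyz.exe

    Wrapped in a for /f loop over a hosts.txt file, the same three lines cover a whole list of machines, though it obviously lacks the nicer UI asked about.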

    Read the article

  • Routing different domains on a VPS

    - by Hans Wassink
    We just went from shared hosting to a VPS. We have several domain names pointing to our DNS, but they all point to the root of the server. What I would like now is a setup where every domain name gets its own folder, so we can run different sites on the VPS. For example: www.example.com points to /var/www/example.com, and www.imapwnu.com points to /var/www/imapwnu.com. First of all, is this possible? Second, I have root SSH access and Webmin on a LAMP server running Ubuntu. Webmin doesn't have BIND9 (I don't know if I need that; some forums pointed me towards something called BIND). Thanks in advance
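    Assuming the DNS records already resolve to the VPS's IP (so BIND is not needed on the box itself), name-based Apache virtual hosts are the usual answer. A minimal sketch using the domains from the question; file layout and the NameVirtualHost line depend on the Apache version:

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName www.example.com
          ServerAlias example.com
          DocumentRoot /var/www/example.com
      </VirtualHost>

      <VirtualHost *:80>
          ServerName www.imapwnu.com
          ServerAlias imapwnu.com
          DocumentRoot /var/www/imapwnu.com
      </VirtualHost>

    On Ubuntu each block would typically live in its own file under /etc/apache2/sites-available/, enabled with a2ensite and an Apache reload.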

    Read the article

  • Exchange mail users cannot send to certain lists

    - by blsub6
    First of all, everyone's on Exchange 2010 using OWA. I have a dynamic distribution list called 'Staff' that contains all users in my domain. I can send to this list and other people can send to this list, but I have one user who cannot send to it. Sending to the list bounces back to that user with the error: "Delivery has failed to these recipients or groups: Staff. The e-mail address you entered couldn't be found. Please check the recipient's e-mail address and try to resend the message. If the problem continues, please contact your helpdesk." followed by a bunch of diagnostic information that I don't want to paste here, because I don't want to have to censor all of the sensitive information it contains (lazy). Can you give me some possible reasons why this would happen? If there are an innumerable number of reasons, where should I start troubleshooting? EDIT: One Exchange server inside the network acts as transport server, client access server and mailbox server, and one Edge Transport server sits in the DMZ.
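    A few hedged Exchange Management Shell checks that would narrow this down (the group name, sender address and date range are placeholders): confirm the dynamic group still resolves to members, and look at what transport actually recorded for the failed submission.

      Get-DynamicDistributionGroup "Staff" | Format-List RecipientFilter, RecipientContainer

      $staff = Get-DynamicDistributionGroup "Staff"
      Get-Recipient -RecipientPreviewFilter $staff.RecipientFilter -OrganizationalUnit $staff.RecipientContainer

      Get-MessageTrackingLog -Sender problem.user@example.com -EventId FAIL -Start (Get-Date).AddDays(-1)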

    Read the article

  • Nginx not serving static files after update

    - by SomeoneS
    First, I must say that I am not an expert in server administration; my site was set up by hosting admins (whom I can no longer contact). A few days ago I updated Nginx to the latest version (the admin told me it was safe to do). But since then, my site serves only HTML content: no CSS, images or JS. If I try to open an image I get the "Welcome to nginx" page (same thing if I try to open static.mysitedomain.com). More details: the site has a static. subdomain, but the static files are still in the same directory they were in before. I have been googling for solutions and tried changing things in /etc/nginx/, but no luck. I feel this is some minor configuration problem. Any ideas? EDIT: Here is the /etc/nginx/nginx.conf file content:

      user www-data;
      worker_processes 4;
      pid /var/run/nginx.pid;

      events {
          worker_connections 768;
          # multi_accept on;
      }

      http {
          ##
          # Basic Settings
          ##
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 65;
          types_hash_max_size 2048;
          # server_tokens off;
          # server_names_hash_bucket_size 64;
          # server_name_in_redirect off;
          include /etc/nginx/mime.types;
          default_type application/octet-stream;

          ##
          # Logging Settings
          ##
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;

          ##
          # Gzip Settings
          ##
          gzip on;
          gzip_disable "msie6";
          # gzip_vary on;
          # gzip_proxied any;
          # gzip_comp_level 6;
          # gzip_buffers 16 8k;
          # gzip_http_version 1.1;
          # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

          ##
          # nginx-naxsi config
          ##
          # Uncomment it if you installed nginx-naxsi
          ##
          #include /etc/nginx/naxsi_core.rules;

          ##
          # nginx-passenger config
          ##
          # Uncomment it if you installed nginx-passenger
          ##
          #passenger_root /usr;
          #passenger_ruby /usr/bin/ruby;

          ##
          # Virtual Host Configs
          ##
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }

    Here is the /etc/nginx/sites-enabled/default file content:

      server {
          #listen 80; ## listen for ipv4; this line is default and implied
          #listen [::]:80 default ipv6only=on; ## listen for ipv6

          root /usr/share/nginx/www;
          index index.html index.htm;

          # Make site accessible from http://localhost/
          server_name localhost;

          location / {
              # First attempt to serve request as file, then
              # as directory, then fall back to index.html
              try_files $uri $uri/ /index.html;
              # Uncomment to enable naxsi on this location
              # include /etc/nginx/naxsi.rules
          }

          location /doc/ {
              alias /usr/share/doc/;
              autoindex on;
              allow 127.0.0.1;
              deny all;
          }

          # Only for nginx-naxsi : process denied requests
          #location /RequestDenied {
          #    # For example, return an error code
          #    return 418;
          #}

          #error_page 404 /404.html;

          # redirect server error pages to the static page /50x.html
          #
          #error_page 500 502 503 504 /50x.html;
          #location = /50x.html {
          #    root /usr/share/nginx/www;
          #}

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          #
          #location ~ \.php$ {
          #    fastcgi_split_path_info ^(.+\.php)(/.+)$;
          #    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
          #
          #    # With php5-cgi alone:
          #    fastcgi_pass 127.0.0.1:9000;
          #    # With php5-fpm:
          #    fastcgi_pass unix:/var/run/php5-fpm.sock;
          #    fastcgi_index index.php;
          #    include fastcgi_params;
          #}

          # deny access to .htaccess files, if Apache's document root
          # concurs with nginx's one
          #
          #location ~ /\.ht {
          #    deny all;
          #}
      }

      # another virtual host using mix of IP-, name-, and port-based configuration
      #
      #server {
      #    listen 8000;
      #    listen somename:8080;
      #    server_name somename alias another.alias;
      #    root html;
      #    index index.html index.htm;
      #
      #    location / {
      #        try_files $uri $uri/ /index.html;
      #    }
      #}

      # HTTPS server
      #
      #server {
      #    listen 443;
      #    server_name localhost;
      #
      #    root html;
      #    index index.html index.htm;
      #
      #    ssl on;
      #    ssl_certificate cert.pem;
      #    ssl_certificate_key cert.key;
      #
      #    ssl_session_timeout 5m;
      #
      #    ssl_protocols SSLv3 TLSv1;
      #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
      #    ssl_prefer_server_ciphers on;
      #
      #    location / {
      #        try_files $uri $uri/ /index.html;
      #    }
      #}
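    Neither file shown above contains a server block whose server_name matches the site or its static. subdomain, so if the update replaced or disabled the old vhost file, requests to static.mysitedomain.com fall through to the default server, which is exactly what the "Welcome to nginx" page indicates. A hedged sketch of the kind of block that would need to exist under /etc/nginx/sites-enabled/; the domain and root path are assumptions, and root should point at the directory that really holds the CSS/JS/images:

      server {
          listen 80;
          server_name static.mysitedomain.com;

          root /var/www/mysitedomain;

          location / {
              expires 30d;
              try_files $uri =404;
          }
      }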

    Read the article

  • Cannot connect to the internet with Internet Explorer

    - by user428368
    I was using Mozilla Firefox for browsing and it works OK. But now, when I try to open any website using IE8, it says "Internet Explorer cannot display the webpage" (and Firefox still works). I ran into this because I wanted to install Google Earth, but it says the program cannot access the server, so Google suggested I open one of three links in IE, and that doesn't work either. By the way, I'm connected to the internet via LAN, but in IE under Internet Options, Connections there is nothing, not a single connection. Please help.

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions and would like to be able to restore files that were deleted regardless of the reason. I have read about the versioning feature for S3 buckets, but I cannot seem to find if recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded, but never modified, and then deleted. Are files deleted in this scenario recoverable? Then, we thought we may just backup the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But, it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to backup S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time. Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then using something like duplicity (http://duplicity.nongnu.org/) we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket...and repeat this process. This seems like it would work and minimize the storage / transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly without bringing data down to the controlling client first. So, I guess there are a couple questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
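    For what it's worth on the versioning question: with versioning enabled, deleting an object that was uploaded once and never modified simply adds a delete marker, and the original version remains recoverable, so that scenario is covered. A hedged AWS CLI sketch of the two pieces discussed above (bucket names are placeholders; an aws s3 sync between two S3 URLs is copied server side rather than routed through the local client):

      # make deletions recoverable in the original bucket
      aws s3api put-bucket-versioning --bucket original-bucket \
          --versioning-configuration Status=Enabled

      # monthly snapshot: bucket-to-bucket copy, no local download
      aws s3 sync s3://original-bucket s3://backup-bucket-2014-06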

    Read the article

  • Other computer on the network cannot connect to MySQL database

    - by user23950
    I have a VB.NET program that uses MySQL as its database. It works when the computer has WampServer installed, but the program gets an unhandled exception error when the computer it runs on does not have WampServer; the only thing installed there is MySQL Connector/NET. How do I make it work? I just want the two programs to access the same MySQL database. I already opened port 20 in the firewall, both TCP and UDP. What do I do? Do I have to tweak the code? Has anyone here tried this before?
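    A hedged sketch of what usually has to be in place on the machine running WampServer before a second machine can connect; the database, user and password names are placeholders, and note that MySQL listens on TCP port 3306 by default, not port 20:

      # my.ini on the WampServer machine: listen on the LAN, not only on localhost
      [mysqld]
      bind-address = 0.0.0.0

      -- run in the MySQL console on the WampServer machine
      GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%' IDENTIFIED BY 'secret';
      FLUSH PRIVILEGES;

    The other machine's connection string then points at the WampServer machine's LAN IP (Server=192.168.x.x;Port=3306;...), and the firewall exception should cover TCP 3306.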

    Read the article

  • How to configure sudoers to always keep the LD_LIBRARY_PATH environment variable?

    - by Yanick Girouard
    No matter what I try, it seems that the LD_LIBRARY_PATH environment variable is not kept after I run a command with sudo. The only way I managed to have it stick, is to prefix my sudo command with LD_LIBRARY_PATH=/the/path whenever I call it from the command-line, but I would like to not have to do this every time. It seems the env_keep option ignores this variable, and so does the exempt_group option. My %group currently has ALL=(ALL) NOPASSWD:ALL as its access in sudoers. I would like this specific environment variable to be kept for any command I run. How can I do this? My server is running Red Hat Enterprise Linux 5.7.
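    For reference, the sudoers line that would normally be expected to do this, plus a common workaround. The caveat is that LD_LIBRARY_PATH is typically stripped by the dynamic loader before sudo (a setuid binary) even starts, which is why prefixing the command works while env_keep alone often does not:

      # /etc/sudoers (edit with visudo)
      Defaults env_keep += "LD_LIBRARY_PATH"

      # shell-side workaround: re-inject the variable on every sudo invocation
      alias sudo='sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH'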

    Read the article

  • Can I change the image associated with my computer when it's sharing as a media server?

    - by animuson
    I currently have my computer set up to share its videos, music, and pictures as a media server so I can easily access all of my stuff from my PS3 and play it on my TV (since I can't connect my computer directly to my TV). However, my dad also has his laptop set up as a media server, for whatever reason, and both of them use the same "Windows Media Player" icon in the list. It's not a huge issue, but I was wondering if it's possible to somehow change what icon your computer sends out to other devices when it's acting as a media server, and if so, how?

    Read the article

  • Windows 7 Firewall configuration

    - by Will Calderwood
    I had a PC set up with a VPN. I used the Windows 7 firewall to block all non-VPN traffic to the internet, but all LAN traffic was allowed. So, with the VPN connected I could reach all networked machines and the internet; without the VPN connected I could only reach the LAN and had no internet access. Unfortunately my drive failed, and I'm setting up the machine again with a replacement drive. I can't for the life of me work out how to set up the firewall again. I can easily set it up to block all non-VPN traffic, but I can't work out how to do that and still allow all LAN traffic whether the VPN is connected or not. Some pointers would be useful. Thanks.
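    One hedged way to express that policy with netsh; the rule names are arbitrary, interfacetype=ras is assumed to match the VPN adapter, and localsubnet scopes the LAN exception:

      rem default-deny outbound on every profile
      netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound

      rem always allow traffic to the local subnet, VPN up or down
      netsh advfirewall firewall add rule name="Allow LAN" dir=out action=allow remoteip=localsubnet profile=any

      rem allow internet-bound traffic only over the dial-up/VPN (RAS) interface
      netsh advfirewall firewall add rule name="Allow VPN" dir=out action=allow interfacetype=ras profile=any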

    Read the article

  • FreeNAS running on ESXi sometimes gets very sluggish

    - by Luma
    Hello everyone, I have an ESXi server (dual quad-core, 8 GB of DDR3 RAM, 6x 1 TB WD Blacks running in RAID 5 on the PERC 6/i controller). I have a 64-bit FreeNAS VM running, and on this VM I keep about 200 GB of files that my Windows machines access. Every now and then the throughput of this VM just dies; for example, right now it can't even handle streaming a song, and when I transfer a folder the speed drops to 10-400 KB/s. I should add that the ESXi box has dual gigabit network cards plugged into a good, solid gigabit switch, and the other Linux and Windows VMs are fine; I have frequently seen speeds over 90 MB/s. The server still has plenty of RAM left over and CPU usage is very low (500-1000 MHz). Any ideas what could cause this? Thanks. Luc
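    A hedged first step for deciding whether this is storage latency on the host rather than something inside the FreeNAS guest is esxtop on the ESXi console or over SSH:

      # on the ESXi host
      esxtop
      #   d = disk adapter view, u = disk device view, v = per-VM disk view
      #   watch DAVG/cmd (device latency) and KAVG/cmd (kernel/queueing latency);
      #   sustained DAVG in the tens of milliseconds during a stall points at the RAID-5 array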

    Read the article

  • Sharing an internet connection in Windows 7 is so much more frustrating than in Windows XP

    - by Phuong Nguyen
    Back in the days of Windows XP, from the Properties dialog of my wireless connection I could enable sharing, select the LAN network from the drop-down list, and boom, I could share it with my friend. We just needed a LAN cable (crossover or straight-through, either is OK) and his laptop would get an automatic IP and gain access to the internet. But now, with the new Windows 7, everything starts to suck. I cannot see the drop-down list in the sharing panel any more, and my friend's laptop no longer gets an automatic IP. Am I doing something wrong? How can I get back the simplicity I had with Windows XP?

    Read the article

  • Route traffic on vpn to another interface on an ASA 5510

    - by Dave
    I have an ASA 5510 that has about 60-70 VPN tunnels. I have four interfaces on the device: 1) external, 2) 192.168.1.0, 3) 192.168.2.0, 4) 192.168.3.0. A VPN tunnel is configured from the remote site (192.168.200.0) to the 192.168.2.0 subnet on the ASA. I have remote applications, hosted on the 192.168.3.0 subnet, that I would like the users at the remote site to be able to access. I can route traffic between the subnets that are located on the ASA. Is there any way I can route traffic from the remote site to the 192.168.3.0 subnet?
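    Sketching the usual approach with assumed names (REMOTE_SITE_VPN for the crypto map ACL, NONAT for the NAT exemption list, inside3 for the nameif of the 192.168.3.0 interface, pre-8.3 NAT syntax): the 192.168.3.0 network has to be added to the tunnel's interesting-traffic ACL and exempted from NAT, and the remote peer's mirror ACL needs the matching change.

      access-list REMOTE_SITE_VPN extended permit ip 192.168.3.0 255.255.255.0 192.168.200.0 255.255.255.0
      access-list NONAT extended permit ip 192.168.3.0 255.255.255.0 192.168.200.0 255.255.255.0
      nat (inside3) 0 access-list NONAT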

    Read the article

  • redundant/multi-site terminal server

    - by Adam
    Hi We have a Hyper-V cluster running 5 virtual terminal servers using HA. We need to be able make this system redundant and so if this site was to fail our users could log into the backup system at another location and access their data via the terminal servers. Any ideas? We were thinking of maybe using a NAS which replicated the data to the other location in real-time(pass-through disks)? and having a similar Hyper-V cluster setup in the backup location. However we would need to create the users in both location and create a virtual mirror without the data ie applications, directories, settings etc. Is this the best way to achieve this? We have read that using Hyper-v pass through disks is a big performance de-grade.

    Read the article

  • Factory reset on an ASUS all-in-one

    - by Ben
    I have an ASUS all-in-one PC (not sure of the model) and I'm trying to perform a factory restore, but nothing has worked yet. I have tried pressing F8; all I had access to was "Restore from an earlier point" (today was the earliest point) and "Restore from an image", which I don't have. I have tried pressing F9, F10 and F11, but all that gave me were options to start Windows normally, run a system diagnostic, or try other options (the F8 menu). I don't have any discs to restore from, and I have found a tutorial on creating a partition(?) to load the restore from. Does anyone have any other ideas?

    Read the article

  • Cheapest server to run Windows 2008 R2?

    - by chopps
    Hey everyone, I want to build a really cheap server to use for testing etc., but I don't want to spend a lot of dough. Any recommendations on what kind of home PC/server would work for these requirements? Any place to get refurbs at a good price?
    Processor - Minimum: 1.4 GHz (x64 processor). Note: an Intel Itanium 2 processor is required for Windows Server 2008 for Itanium-Based Systems.
    Memory - Minimum: 512 MB RAM. Maximum: 8 GB (Foundation), 32 GB (Standard) or 2 TB (Enterprise, Datacenter, and Itanium-Based Systems).
    Disk space - Minimum: 32 GB or greater; Foundation: 10 GB or greater. Note: computers with more than 16 GB of RAM will require more disk space for paging, hibernation, and dump files.
    Display - Super VGA (800 x 600) or higher resolution monitor.
    Other - DVD drive, keyboard and Microsoft mouse (or compatible pointing device), internet access (fees may apply).

    Read the article

  • CentOS/Apache killing connections

    - by fin1te
    Getting a really strange error. Basically, whenever I browse to my server (http://[ip_address] or http://[hostname]), the page doesn't load and my active SSH connection drops out. I installed CentOS 5.5, and then httpd and PHP 5.3. No other applications were installed, so I can't imagine it's something else causing it. I also reinstalled CentOS 5.5 completely fresh; the only thing I did to it was yum install httpd, and it still caused this issue. I've changed nothing in the config or anything else. It's driving me mad; has anyone heard of this? It's really frustrating, since every time I attempt to debug this issue I get kicked off SSH and have to log back in. There's nothing in the Apache error logs, and nothing in the access log recording my attempt. Also, the output of uname is: Linux [hostname] 2.6.35.4-rscloud #8 SMP Mon Sep 20 15:54:33 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. Thank you

    Read the article

  • How to get physical partition name from iSCSI details on Windows?

    - by Barry Kelly
    I've got a piece of software that needs the name of a partition in \Device\Harddisk2\Partition1 style, as shown e.g. in WinObj. I want to get this partition name from details of the iSCSI connection that underlies the partition. The trouble is that disk order is not fixed - depending on what devices are connected and initialized in what order, it can move around. So suppose I have the portal name (DNS of the iSCSI target), target IQN, etc. I'd like to somehow discover which volumes in the system relate to it, in an automated fashion. I can write some PowerShell WMI queries that get somewhat close to the desired info: PS> get-wmiobject -class Win32_DiskPartition NumberOfBlocks : 204800 BootPartition : True Name : Disk #0, Partition #0 PrimaryPartition : True Size : 104857600 Index : 0 ... From the Name here, I think I can fabricate the corresponding name by adding 1 to the partition number: \Device\Harddisk0\Partition1 - Partition0 appears to be a fake partition mapping to the whole disk. But the above doesn't have enough information to map to the underlying physical device, unless I take a guess based on exact size matching. I can get some info on SCSI devices, but it's not helpful in joining things up (iSCSI target is Nexenta/Solaris COMSTAR): PS> get-wmiobject -class Win32_SCSIControllerDevice __GENUS : 2 __CLASS : Win32_SCSIControllerDevice ... Antecedent : \\COBRA\root\cimv2:Win32_SCSIController.DeviceID="ROOT\\ISCSIPRT\\0000" Dependent : \\COBRA\root\cimv2:Win32_PnPEntity.DeviceID="SCSI\\DISK&VEN_NEXENTA&PROD_COMSTAR... Similarly, I can run queries like these: PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_TargetClass PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_PersistentDevices These guys return information relating to my iSCSI target name and the GUID volume name respectively (a volume name like \\?\Volume{guid-goes-here}), but the GUID volume name is no good to me, and there doesn't appear to be a reliable correspondence between the target name and the volume that I can join on. I simply can't find an easy way of getting from an IQN (e.g. iqn.1992-01.com.example:storage:diskarrays-sn-a8675309) to physical partitions mapped from that target. The way I do it by hand? I start Disk Management, and look for a partition of the correct size, verify that its driver says NEXENTA COMSTAR, and look at the disk number. But even this is unreliable if I have multiple iSCSI volumes of the exact same size. Any suggestions?
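    A hedged PowerShell sketch of the automated lookup described above, going from the target IQN to the \Device\HarddiskN name via the iSCSI initiator's session class. It assumes the Devices property exposes LegacyName in the \\.\PhysicalDriveN form, as it does on recent Windows versions; the IQN is the example one from the question:

      $target = "iqn.1992-01.com.example:storage:diskarrays-sn-a8675309"

      Get-WmiObject -Namespace ROOT\WMI -Class MSiSCSIInitiator_SessionClass |
          Where-Object { $_.TargetName -eq $target } |
          ForEach-Object { $_.Devices } |
          ForEach-Object {
              # LegacyName is typically \\.\PhysicalDriveN; N matches \Device\HarddiskN
              $n = $_.LegacyName -replace '\D', ''
              "{0}  ->  \Device\Harddisk{1}" -f $_.LegacyName, $n
          }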

    Read the article

  • Creating a URL from AWS CloudFront

    - by GroovyUser
    I have JS files that I have uploaded to Amazon S3 and linked to CloudFront. I got a URL something like this: dxxxxxxxx.cloudfront.net. But opening that URL in my browser, I get an error:
      <Error>
        <Code>AccessDenied</Code>
        <Message>Access Denied</Message>
        <RequestId>xxxxxx</RequestId>
        <HostId>xxxxxxxxxxxxxxx</HostId>
      </Error>
    What I actually want is to use that URL to include the files in my web page. How can I do that? Thanks in advance.
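    Two things usually account for this. Opening the bare distribution domain returns AccessDenied unless a default root object is set, so the request normally needs the object key appended (e.g. dxxxxxxxx.cloudfront.net/myscript.js, where myscript.js is a placeholder). And CloudFront can only serve the objects if it is allowed to read them; a hedged sketch of an S3 bucket policy granting read access to a CloudFront origin access identity, with the bucket name and OAI ID as placeholders:

      {
        "Version": "2008-10-17",
        "Statement": [{
          "Sid": "AllowCloudFrontRead",
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE12345" },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::my-js-bucket/*"
        }]
      }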

    Read the article

  • Intraforest user account merge with Active Directory

    - by Neobyte
    I have a scenario where there is a root domain (RD) and two child domains (CD1 and CD2). Users have accounts on both CD1 and CD2, with identical samAccountNames, names etc, and various applications either use the CD1 or CD2 account for authentication to resources. I need to collapse CD2 into CD1, so I want to merge the accounts together. However ADMT does not allow me this option (merge options are greyed out), I think because it does not support intraforest merge of accounts (although it does not explicitly state this anywhere in the documentation). My question is - what is the easiest way for me to merge these accounts? Ultimately all I really need (I think) is for the SID of CD2\user1 to be added to the SIDHistory of CD1\user1 - is there a tool that supports this? Computer accounts and profiles are not a concern for this scenario. Group migration is unlikely to be an issue either - CD2\user1 is usually granted resource access through membership of a group on CD1.

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address that dispatches HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen for HTTP requests, and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run Ubuntu and also run an nginx instance. I'm having trouble with caching on a particular website that is fully static: nginx keeps serving me stale content after file updates until I either clear /var/cache/nginx/ and restart nginx, or set proxy_cache off for this server and reload the config. Here are the details of my configuration. On the server (Proxmox), /etc/nginx/nginx.conf:

      user www-data;
      worker_processes 8;
      pid /var/run/nginx.pid;

      events {
          worker_connections 768;
          # multi_accept on;
          use epoll;
      }

      http {
          ##
          # Basic Settings
          ##
          sendfile on;
          #tcp_nopush on;
          tcp_nodelay on;
          #keepalive_timeout 65;
          types_hash_max_size 2048;
          server_tokens off;
          # server_names_hash_bucket_size 64;
          # server_name_in_redirect off;
          include /etc/nginx/mime.types;
          default_type application/octet-stream;
          client_body_buffer_size 1k;
          client_max_body_size 8m;
          large_client_header_buffers 1 1K;
          ignore_invalid_headers on;
          client_body_timeout 5;
          client_header_timeout 5;
          keepalive_timeout 5 5;
          send_timeout 5;
          server_name_in_redirect off;

          ##
          # Logging Settings
          ##
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;

          ##
          # Gzip Settings
          ##
          gzip on;
          gzip_disable "MSIE [1-6]\.(?!.*SV1)";
          gzip_vary on;
          gzip_proxied any;
          gzip_comp_level 6;
          # gzip_buffers 16 8k;
          gzip_http_version 1.1;
          gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

          limit_conn_zone $binary_remote_addr zone=gulag:1m;
          limit_conn gulag 50;

          ##
          # Virtual Host Configs
          ##
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }

    /etc/nginx/conf.d/proxy.conf:

      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_hide_header X-Powered-By;
      proxy_intercept_errors on;
      proxy_buffering on;
      proxy_cache_key "$scheme://$host$request_uri";
      proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    /etc/nginx/sites-available/my-domain.conf:

      server {
          listen 80;
          server_name .my-domain.com;
          access_log off;

          location / {
              proxy_pass http://my-domain.local:80/;
              proxy_cache cache;
              proxy_cache_valid 12h;
              expires 30d;
              proxy_cache_use_stale error timeout invalid_header updating;
          }
      }

    On the container (my-domain.local), nginx.conf (everything is inside the main config file; it's been done quickly):

      user www-data;
      worker_processes 1;
      error_log logs/error.log;

      events {
          worker_connections 1024;
      }

      http {
          include mime.types;
          default_type application/octet-stream;
          sendfile on;
          #tcp_nopush on;
          keepalive_timeout 65;
          gzip off;

          server {
              listen 80;
              server_name .my-domain.com;
              root /var/www;
              access_log logs/host.access.log;
          }
      }

    I've read many blog posts and answers before resorting to posting my own question; most answers I can see suggest setting sendfile off, but that didn't work for me. I have tried many other things and double-checked my settings, and all seems fine. So I'm wondering whether I'm expecting nginx's cache to do something it's not meant to do.
    Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it next requested it. But I now have the feeling I misunderstood many things. Above all, I now wonder how nginx on the server could even know that a file in the container has changed. I have seen a directive proxy_header_pass (or something like it); should I use this to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
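    For what it's worth, the short answer to the last paragraph is that the proxy cache never learns about changes in the container on its own: entries only leave the cache when their proxy_cache_valid time expires, when a request explicitly bypasses them, or when they are purged (purging needs the third-party proxy_cache_purge module or the commercial nginx). The expires 30d in the proxy block also tells browsers themselves to cache for 30 days, which compounds the staleness. A hedged variation of the proxy location illustrating both knobs; the X-Refresh header name is arbitrary:

      location / {
          proxy_pass http://my-domain.local:80/;
          proxy_cache cache;
          proxy_cache_valid 200 301 302 10m;                  # shorter TTL so edits appear sooner
          proxy_cache_bypass $http_x_refresh;                 # curl -H "X-Refresh: 1" http://www.my-domain.com/file.css forces a refetch
          add_header X-Cache-Status $upstream_cache_status;   # HIT / MISS / BYPASS, useful while debugging
      }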

    Read the article
