Search Results

Search found 75614 results on 3025 pages for 'file location'.

Page 856/3025 | < Previous Page | 852 853 854 855 856 857 858 859 860 861 862 863  | Next Page >

  • Fixing a corrupted Windows Server 2003 server

    - by Keith
    I have a Windows Server 2003 server that is mainly used for some reporting done in SQL Server. Recently Windows has started complaining about corruption; we are getting NTFS error 55: "The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume \Device\HarddiskVolume1." The server is RAID 5, and I did have a disk die, but the array never went degraded since I have a hot spare. I replaced the hot spare and I'm still having problems. When I run chkdsk I get tons of messages. Some are:

        Deleting corrupted attribute record (128, "") from file record segment 194746

    Those go on for a while, then it deletes some orphan files, then it does:

        Correcting error in index $I30 for file 132426

    That also goes on for a while, and then I get tons of:

        Recovering orphaned file RE1AB6~1.LOG into directory file 534959

    I have seen a lot of errors relating to SQL Server Reporting Services. What are my options at this point? I would prefer to fix the issue instead of building a new server, but I don't know if I can at this point.
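
    A first step, offered as a hedged suggestion: let chkdsk actually repair the volume rather than just report. On the system volume it will offer to schedule the check at the next reboot, which also keeps SQL Server from holding files open during the repair:

        REM Fix file system errors and scan for/recover bad sectors:
        chkdsk C: /f /r

    If corruption keeps reappearing after a clean chkdsk pass, suspect the RAID controller, its cache battery, or the rebuilt disk rather than NTFS itself.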

    Read the article

  • Set nginx.conf to deny all connections except to certain files or directories

    - by Ben
    I am trying to set up Nginx so that all connections to my numeric IP are denied, with the exception of a few arbitrary directories and files. So if someone goes to my IP, they are allowed to access the index.php file and the phpmyadmin directory, for example, but should they try to access any other directories, they will be denied. This is my server block from nginx.conf:

        server {
            listen       80;
            server_name  localhost;

            location / {
                root   html;
                index  index.html index.htm index.php;
            }

            location ~ \.php$ {
                root           html;
                fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME /srv/http/nginx/$fastcgi_script_name;
                include        fastcgi_params;
            }
        }

    How would I proceed? Thanks very much!
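
    One possible approach (a sketch, untested, reusing the php-fpm socket from the question): whitelist the allowed paths with exact and prefix locations, and deny everything else. Exact-match locations take precedence over regex locations, so the PHP handler can be attached directly to the whitelisted file:

        server {
            listen       80;
            server_name  localhost;
            root         html;

            # Allowed: the bare root, which serves index.php via the
            # exact-match location below after the index internal redirect.
            location = / {
                index index.php;
            }

            # Allowed: the front-page script only.
            location = /index.php {
                fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME /srv/http/nginx/$fastcgi_script_name;
                include        fastcgi_params;
            }

            # Allowed: everything under /phpmyadmin/ (add a nested PHP
            # location here if its scripts must execute too).
            location /phpmyadmin/ {
                index index.php;
            }

            # Everything else on the bare IP is refused.
            location / {
                deny all;
            }
        }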

    Read the article

  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx & Gunicorn) environment, and my app (as well as various config files) is stored in a Python virtual environment inside /srv, which the www-data user has access to. The nginx and gunicorn processes are all run as www-data. My web app requires secure credentials, which I am storing in an environment.sh file. This file contains various exports and is run using source before the gunicorn processes execute. My concern is the location of the environment.sh file and its permissions. Will it be okay storing this file inside the /srv folder where www-data has access to it? Or should it be stored and owned by root somewhere else, such as /var/myapp/environment.sh? Also, regarding the www-data user: if any of my web processes (which run as www-data) are compromised and someone gains access to them, does that mean the attacker could potentially read any file on the system, even if they can't write? Including my secure keys?
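
    One conservative layout, sketched with the /var/myapp path from the question (the /srv source path is hypothetical): keep the file out of the deployed tree, owned by root, group-readable by www-data only. A compromised www-data process can read any file whose permissions allow it (so it could still read this one, since the workers have to source it), but nothing that is 0600 and root-owned:

        sudo mkdir -p /var/myapp
        sudo mv /srv/myapp/environment.sh /var/myapp/environment.sh
        sudo chown root:www-data /var/myapp/environment.sh
        # root read/write, www-data group read, nothing for others:
        sudo chmod 640 /var/myapp/environment.sh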

    Read the article

  • What's the shortest way to post a cropped screenshot on the web?

    - by Borek
    If I want to send someone a piece of my screen, this is what I currently do:

    1. PrtScr or Alt+PrtScr
    2. Open Paint.NET
    3. Paste the screen shot
    4. Make a selection
    5. Crop image to selection
    6. Save image to some location - and remember it!
    7. Go to some image hosting site (there are plenty of them in the days of Twitter)
    8. Click their "Browse" button
    9. Browse for the image, if I happened to remember the location where I stored it :)
    10. Upload the image and obtain the link which I can share

    This is simply too many steps. I don't usually mind doing steps 1 to 5, but steps 6 and 9 especially are annoying. Jing is pretty close to what I'm looking for, but I find their horrible URLs unbearable. If there was something with similar functionality but better or pluggable hosting, that would be great.

    Read the article

  • Is it possible to rsync your web site to another backup server and use the same .htaccess files?

    - by stephenmm
    I am trying to use rsync to replicate all the files from one web server to another server that could act as a backup if the first one went down. The problem I am having is that the .htaccess file requires the AuthUserFile to have the fully qualified path to the .htpasswd file, and I cannot make the paths the same on the two machines. Does anyone know how I might use the same .htaccess file on two different servers? Thanks for any help that can be provided.
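
    One workaround, sketched with hypothetical paths: pick a single canonical path for the AuthUserFile, and create it on both machines, using a symlink on whichever server keeps the real file somewhere else. The same .htaccess then works unchanged:

        # On the backup server, where the file actually lives elsewhere:
        sudo mkdir -p /var/www/auth
        sudo ln -s /backup/real/location/.htpasswd /var/www/auth/.htpasswd

        # .htaccess on both servers can now say:
        #   AuthUserFile /var/www/auth/.htpasswd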

    Read the article

  • nginx+django serving static files

    - by avalore
    I have followed the instructions for setting up Django with nginx from the Django wiki (https://code.djangoproject.com/wiki/DjangoAndNginx) and have nginx set up as follows (a few name changes to fit my setup):

        user nginx nginx;
        worker_processes 2;

        error_log /var/log/nginx/error_log info;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';

            client_header_timeout 10m;
            client_body_timeout 10m;
            send_timeout 10m;

            connection_pool_size 256;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 2k;
            request_pool_size 4k;

            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;

            output_buffers 1 32k;
            postpone_output 1460;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            keepalive_timeout 75 20;

            ignore_invalid_headers on;

            index index.html;

            server {
                listen 80;
                server_name localhost;

                location /static/ {
                    root /srv/static/;
                }

                location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mov) {
                    access_log off;
                    expires 30d;
                }

                location / {
                    # host and port to fastcgi server
                    fastcgi_pass 127.0.0.1:8080;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_pass_header Authorization;
                    fastcgi_intercept_errors off;
                    fastcgi_param REMOTE_ADDR $remote_addr;
                }

                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log;
            }
        }

    Static files aren't being served (nginx 404). If I look in the access log, it seems nginx is looking in /etc/nginx/html/static... rather than /srv/static/ as specified in the config. I've no clue why it's doing this; any help would be hugely appreciated.
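
    For what it's worth, one likely culprit in that server block is the root directive: root appends the full request URI to the path, so /static/foo.css is looked up at /srv/static/static/foo.css. Assuming the files live in /srv/static/, either of these sketches maps the URL correctly (and the /etc/nginx/html path in the log is worth a second look too; it suggests nginx may be loading a default config rather than this one):

        location /static/ {
            root /srv;             # /static/foo.css -> /srv/static/foo.css
        }

        # or, equivalently, alias replaces the matched prefix:
        location /static/ {
            alias /srv/static/;    # /static/foo.css -> /srv/static/foo.css
        }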

    Read the article

  • How do you determine the OWNER of an Oracle Database

    - by Kwang Mark Eleven
    When you install an Oracle database on a Unix server, the Unix user ID you use for the installation becomes the OWNER of the database. What is the most reliable and general way of determining, in a shell script, which Unix user is the owner of an Oracle installation? I mean, can you grep a file created by the installation to find this information, or shall I resort to using the ls command on a specific file in a specific directory? If the name of the file to be checked is also variable, I would need a way of determining the name and path of the file. Thanks in advance for your time.
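
    One common heuristic (a sketch, not guaranteed for every installation): the oracle binary under $ORACLE_HOME is owned by the installing user, so its file owner answers the question without grepping any config file. This assumes ORACLE_HOME is set or discoverable (e.g. from /etc/oratab):

        # Portable, if a bit crude:
        ls -l "$ORACLE_HOME/bin/oracle" | awk '{print $3}'

        # More direct with GNU coreutils:
        stat -c '%U' "$ORACLE_HOME/bin/oracle"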

    Read the article

  • `rsync` NEVER uses its 'famous' delta-transfer!

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space-reservation turned on. That means the file size is not changing, while some chunks in it (4 MiB each) are constantly changing as the download progresses. At 90% downloaded, I do the initial rsync to save time later:

        $ rsync -Ph DVD.iso /some/target/
        sending incremental file list
        DVD.iso
             2.60G 100%   40.23MB/s    0:01:01 (xfer#1, to-check=0/1)

        sent 2.60G bytes  received 73 bytes  34.59M bytes/sec
        total size is 2.60G  speedup is 1.00

    Then, when the file is fully downloaded, I rsync again:

        total size is 2.60G  speedup is 1.00

    Speedup=1 says the delta-transfer was not used, although 90% of the file has not changed. Why?!
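
    The likely explanation: when both the source and the destination are local paths, rsync implies --whole-file, because the rolling-checksum delta algorithm only pays off when a network link is the bottleneck. It can be forced back on, though on a local copy reading both files plus the checksum work is often slower than simply recopying:

        rsync -Ph --no-whole-file DVD.iso /some/target/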

    Read the article

  • How to Remove Extensions From, and Force the Trailing Slash at the End of URLs?

    - by Kronbernkzion
    Example of current file structure:

        example.com/foo.php
        example.com/bar.html
        example.com/directory/
        example.com/directory/foo.php
        example.com/directory/bar.html
        example.com/cgi-bin/directory/foo.cgi

    I want to remove the HTML, PHP and CGI extensions from, and then force the trailing slash at the end of, URLs. So it would look like this:

        example.com/foo/
        example.com/bar/
        example.com/directory/
        example.com/directory/foo/
        example.com/directory/bar/
        example.com/cgi-bin/directory/foo/

    I've searched for a solution for 17 hours straight and visited more than a few hundred pages on various blogs and forums. I'm not joking. So I think I've done my research. Here is the code that sits in my .htaccess file right now:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(([^/]+/)*[^./]+)/$ $1.html

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]|/)$
        RewriteRule (.*)$ /$1/ [R=301,L]

    As you can see, this code only removes .html (and I'm not very happy with it because I think it could be done a lot simpler). I can remove the extension from PHP files when I rename them to .html through .htaccess, but that's not what I want: I want to remove it directly. This is the first thing I don't know how to do. The second thing is actually very annoying. My .htaccess file with the code above adds .html/ to every string entered after example.com/directory/foo/. So if I enter example.com/directory/foo/bar (obviously /bar doesn't exist, since foo is a file), instead of just displaying a message that the page was not found, it converts it to example.com/directory/foo/bar.html/, then searches for a file for a few seconds and then displays the not-found message. This, of course, is bad behavior. So, once again, I need the code in .htaccess to do the following things:

    1. Remove the .html extension
    2. Remove the .php extension
    3. Remove the .cgi extension
    4. Force the trailing slash at the end of URLs
    5. Handle requests correctly (no adding trailing slashes or extensions to strings if the file or directory doesn't exist on the server)
    6. Be as simple as possible

    I would very much appreciate any help. And to the first person that gives me the solution, I'll send two $50 iTunes Store gift cards for the US store. If this offends anyone, I am truly sorry and I apologize. Thanks in advance. And sorry for such a long post.
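
    For reference, a sketch of one way to structure this (untested, and it assumes the .htaccess sits in the document root): first redirect extensionless requests to the slashed form, then internally map the slashed form back to whichever real file exists, so nonexistent paths fall through to a plain 404 instead of gaining bogus suffixes:

        RewriteEngine On

        # Externally: force the trailing slash on extensionless URLs
        # that don't correspond to a real file or directory.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^.]+[^/])$ /$1/ [R=301,L]

        # Internally: map /foo/ back to foo.html, foo.php or foo.cgi,
        # but only if that file actually exists.
        RewriteCond %{DOCUMENT_ROOT}/$1.html -f
        RewriteRule ^(.+)/$ $1.html [L]

        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+)/$ $1.php [L]

        RewriteCond %{DOCUMENT_ROOT}/$1.cgi -f
        RewriteRule ^(.+)/$ $1.cgi [L]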

    Read the article

  • Remove trailing slash using redirect directive in vhost

    - by Choy
    I have an issue where URLs that end in a "/" after a file name cause CSS/JS to break. I.e.:

        http://www.mysite.com/index.php/   <-- breaks
        http://www.mysite.com/             <-- OK; it only breaks for file names

    To fix this, I tried adding a Redirect 301 directive in the vhost file as follows, where I'm checking whether there's an extension with a slash after it:

        <VirtualHost *:80>
            ServerName mysite.com
            Redirect 301 ^(.*?\..+)/$ http://mysite.com/$1
        </VirtualHost>

    The redirect appears to do nothing. Is this an issue with my implementation, or is what I'm trying to accomplish not possible with a Redirect 301 in the vhost file?
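
    That looks like the implementation issue: mod_alias's Redirect matches a plain URL-path prefix and does not understand regular expressions, so the pattern above never matches anything. RedirectMatch is the regex-aware variant; a sketch:

        <VirtualHost *:80>
            ServerName mysite.com
            # Match any path whose last segment contains a dot and
            # ends in "/", and redirect to the same path without it.
            RedirectMatch 301 ^(/.*\.[^/]+)/$ http://mysite.com$1
        </VirtualHost>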

    Read the article

  • I have to manually enter the pwd when I login at school and when I come back home with my laptop (DELL E6410)

    - by Bong Paduano
    I didn't have to do this for the first 3 months of going to school. My laptop used to connect to my home Wi-Fi and school Wi-Fi automatically; then after the third month something changed, and ever since then I literally have to enter each Wi-Fi password manually at each location every time I try to connect to its WLAN. I tried adding each Wi-Fi network in the "Manage wireless networks" section of the Network and Sharing Center, but every time I log in at one of the locations, the created network disappears, and I did this more than once. I tried restoring the OS, but to no avail. Can someone please assist me with this issue? I'm not sure if this issue has been addressed on this website, but I haven't seen any similar question here. I appreciate any help anyone may be able to share.

    Read the article

  • Programs keep waiting for external disk to spin up - how to ignore disk?

    - by Andrew J. Brehm
    Like many Mac users, I have an external FireWire disk hooked up to my Mac to be used by Time Machine. This works very well, backup-wise. The problem is that very often, when I use a Mac application and try to open a file, the file selection dialog window hangs until the external disk has spun up. I never want to open a file on the external disk. Sometimes this happens even when I just want to save a file I already saved (i.e. type something and press meta-S). Is there anything I can do about this?
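
    One thing that may help, offered as a hedged sketch: tell OS X not to spin disks down at all. Note this applies to every disk, including the internal one (more power, more wear), and an external enclosure with its own firmware sleep timer may ignore it entirely:

        sudo pmset -a disksleep 0    # 0 = never put disks to sleep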

    Read the article

  • Can't access Terminal anymore, only shows a cursor

    - by user138304
    I run OS X. Following these directions (Installing MySQL on Mac OS X), I added a file to /usr whose contents were:

        PATH=/usr/local/mysql/bin:$PATH

    Actually, I was trying to get the mysql command to work, but now I cannot access Terminal. All I get is a cursor, no command line. I also cannot find the file I created in the Finder. I used Command+Shift+G to go to the folder /usr, and the file is not there.

    Edit: I solved the problem by restarting my computer. I am really not sure what the problem was. I got the idea because "Could not open a new pseudo-tty." appeared in my terminal after following slhck's directions to remove my .profile file. I then searched Google and found this: http://blogs.oreilly.com/digitalmedia/2008/03/fixing-terminal-tty-errors.html. Thanks

    Read the article

  • Django fails to find static files served by nginx

    - by Simon
    I know this is a really noobish question, but I can't find any solution despite finding the problem trivial. I have a Django application deployed with gunicorn. The static files are served by the nginx server at URLs like myserver.com/static/admin/css/base.css. However, my Django application keeps looking for the static files at myserver.com:8001/static/admin/css/base.css and is obviously failing (404). I don't know how to fix this. Is it a Django or an nginx problem? Here is my nginx configuration file:

        server {
            server_name myserver.com;
            access_log off;

            location /static/ {
                alias /home/myproject/static/;
            }

            location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            }
        }

    Thanks for the help!
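
    A guess at the cause, with a sketch of the fix: without an explicit Host header, nginx forwards Host: 127.0.0.1:8001 upstream, and the application happily builds absolute URLs (and redirects) with that port. Passing the original host through usually clears this up:

        location / {
            proxy_pass http://127.0.0.1:8001;
            # Keep the backend from seeing 127.0.0.1:8001 as its host:
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Real-IP $remote_addr;
            add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
        }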

    Read the article

  • How to get the summarized sizes of folders and their subfolders?

    - by Kau-Boy
    Let's say I want to get the size of each folder of a Linux file system. When I use ls -la I don't really get the summarized size of the folders. If I use df I get the size of each mounted file system, but that doesn't help me either. And with du I get the size of each subfolder and the summary for the whole file system. But I want to have only the summarized size of each folder within the root folder of the file system. Is there any command to achieve that?
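
    du can do exactly this when told how deep to summarize; two equivalent sketches:

        # One summarized total per top-level directory:
        du -h --max-depth=1 /

        # Or summarize each entry explicitly:
        du -sh /*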

    Read the article

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? A wireless ADSL modem router (D-Link 2640R) is installed in the living room at about 1.8 m height, working fine, synchronising and getting/serving a stable internet connection.

    First situation:

    - Laptop 01 at the other end of the house, say in room 01, south of the living room and about 15 m away: stable signal of good to very good quality, no disconnections.
    - Laptop 02 in room 02, opposite room 01 (5 m west), which puts it at almost the same distance and direction from the router, 15 m north: stable signal of good to very good quality, no disconnections.

    Second situation:

    - Laptop 01 moved to room 03, north of the living room (actually just 3 m behind the wall where the router sits): stable signal of excellent quality, no disconnections.
    - Laptop 02 still in room 02, but now experiencing frequent disconnections. It is actually almost impossible to get on the Internet even though the signal level is still very good: either there's no Internet with the Wi-Fi icon showing connected to the access point, or no connection is established at all, which happens every 2 minutes. That means virtually no Internet at all, as I get a window of about a minute to load any website or even reach the router's web-based control panel.

    If Laptop 01 is completely shut down, or its Wi-Fi adapter is disabled, or even still working but with its Wi-Fi MAC address blocked, then Laptop 02 has no problems at all. If Laptop 02 is moved nearer to the router, into the living room for instance, then no connection problem occurs, even if Laptop 01 is also connected. And if we move Laptop 01 back to its original location (room 01), then there's no problem either.

    I'm completely lost and don't know how to address this issue. I tried changing the Wi-Fi channel and even tried the auto channel scan, but that didn't solve it. I know the problem probably comes from Laptop 01 being in its new location, or from some sort of interference, as the problem occurs only under the described conditions, but I have no idea how to solve it! I also scanned the neighborhood for Wi-Fi congestion using inSSIDer; there are a few other access points, but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use?

    Read the article

  • How can I bulk rename files in a RAR or ZIP archive on the mac?

    - by Chris R
    I have a set of archive files -- both zip and rar formats -- inside of which I need to rename some files. Specifically, I want to do something like this:

        for each archive file in a directory
            for each file in the archive
                if the file name matches the regular expression /(.* - [0-9]{2})([0-9]{2} - .*)/
                    rename the file as \1-\2

    The trick isn't so much the generation of the new name; I can do that with bash or sed or anything else. It's the set of commands to manipulate the files in the archives using rar/unrar or unzip/zip. (If it makes a difference, I'm re-formatting some CBR/CBZ files to get the double-page spreads to come up in the right order in SimpleComic -- it interprets page 0203 as page 203, which makes the story a bit hard to follow.)
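
    A rough bash sketch for the zip/cbz case (untested; output names are hypothetical, and rar archives would need unrar x plus rar a in place of unzip/zip). It sidesteps in-archive renaming entirely by unpacking, renaming, and repacking:

        #!/bin/bash
        for archive in *.zip *.cbz; do
          [ -e "$archive" ] || continue
          tmp=$(mktemp -d)
          unzip -q "$archive" -d "$tmp"
          for f in "$tmp"/*; do
            base=$(basename "$f")
            # Insert the hyphen between the two page numbers.
            new=$(echo "$base" | sed -E 's/^(.* - [0-9]{2})([0-9]{2} - .*)$/\1-\2/')
            [ "$base" != "$new" ] && mv "$f" "$tmp/$new"
          done
          out="${archive%.*}-renamed.zip"
          (cd "$tmp" && zip -q -r "$OLDPWD/$out" .)
          rm -rf "$tmp"
        done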

    Read the article

  • Need to create street maps images with route plotted from address 1 to address 2

    - by daustin777
    I'm looking for software to create street map images that show the route from address 1 to address 2. It needs to be able to create the map images from either a database file containing the addresses, a delimited text file listing the addresses to route in each row, or an Excel file. I need to do this to create custom maps in bulk -- 500 to 20,000 at a time from the data file. The purpose is to provide a map with a route from a location (address 1) to a retail store (address 2). The maps will be printed on a postcard. I have the data; I just need the mapping software. Is there software available that can do this?

    Read the article

  • excel 2010 search function?

    - by Tom
    Can a range A1:A200 be searched for a "name" and then, once found, the cell location fed into a formula? Something like: find "tom" in A1:A200, [found at cell A22], then use A22 in:

        =IF(MINUTE(Auto_Agent!G27)+(SECOND(Auto_Agent!G27))=0,"",(MINUTE(Auto_Agent!G27)*60+(SECOND(Auto_Agent!G27))))

    The problem I'm having is that each time I import data, names can be in different cell locations depending on who is working that day. Example:

        Agent: Tom
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
        Avg Skillset Talk Time: 00:06:52
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05 () 9/19/2012
        Avg Skillset Talk Time: 00:06:52
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
        Agent: Bill
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
        Avg Skillset Talk Time: 00:06:52
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05 () 9/19/2012
        Avg Skillset Talk Time: 00:06:52
        07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
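
    MATCH is the usual tool for this: it returns the row where a value is found, and INDEX turns that row back into a cell reference. A sketch, assuming the agent labels appear verbatim in column A:

        =MATCH("Agent: Tom", A1:A200, 0)

    returns the row of the exact match (e.g. 22), and

        =INDEX(G1:G200, MATCH("Agent: Tom", A1:A200, 0) + 1)

    reads column G one row below the label row; adjust the +1 to wherever the time you want sits relative to the agent's row.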

    Read the article

  • Is there a way to control two instantiated systemd services as a single unit?

    - by rascalking
    I've got a couple of Python web services I'm trying to run on a Fedora 15 box. They're run by paster, and the only difference in starting them is the config file they read. This seems like a good fit for systemd's instantiated services, but I'd like to be able to control them as a single unit. A systemd target that requires both services seems like the way to approach that. Starting the target does start both services, but stopping the target leaves them running. Here's the service file:

        [Unit]
        Description=AUI Instance on Port %i
        After=syslog.target

        [Service]
        WorkingDirectory=/usr/local/share/aui
        ExecStart=/opt/cogo/bin/paster serve --log-file=/var/log/aui/%i deploy-%i.ini
        Restart=always
        RestartSec=2
        User=aui
        Group=aui

        [Install]
        WantedBy=multi-user.target

    And here's the target file:

        [Unit]
        Description=AUI
        [email protected] [email protected]
        After=syslog.target

        [Install]
        WantedBy=multi-user.target

    Is this kind of grouping even possible with systemd?
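
    It is possible; what's missing is a stop-propagation dependency. Requires= pulls the instances in on start, but stopping the target does not propagate down to the units it requires. Adding PartOf= in the instance unit makes stop (and restart) of the target propagate to every instance; a sketch of the changed service file:

        [Unit]
        Description=AUI Instance on Port %i
        After=syslog.target
        # Stop/restart of aui.target now propagates to each instance:
        PartOf=aui.target

        [Service]
        WorkingDirectory=/usr/local/share/aui
        ExecStart=/opt/cogo/bin/paster serve --log-file=/var/log/aui/%i deploy-%i.ini
        Restart=always
        RestartSec=2
        User=aui
        Group=aui

    With that in place, systemctl start aui.target and systemctl stop aui.target should control both instances as one unit.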

    Read the article

  • Why are certain folders in my XP network share really, really slow?

    - by bikefixxer
    I have a workgroup set up with Windows XP. My file "server" is running XP Pro and the clients are running XP Home. I've turned simple file sharing off on the server because certain clients need access to certain folders and not to others, and I want to keep it that way. Therefore, I've used the granular sharing/security settings to give certain clients access to certain folders. I'm using the net use command in a batch file on the clients to add the share when they log on, so it's always available via a mapped drive or a shortcut. On some clients "My Documents" points to the mapped drive, but all of the local and application settings stay local. Everything works well except for accessing one particular folder on the network. It contains a lot of random batch files and self-executable programs I use for diagnostics and whatnot, and nearly every time I open the folder the computer hangs for 15-60 seconds. This happens on every machine, including the server (but not nearly as often as on the clients). I've searched high and low, cannot figure it out, and it's driving me crazy. Here are all the things I've tried, to no avail:

    - Disabled firewall (XP) and anti-virus (ESET NOD32)
    - Deleted any desktop.ini file I can find in the share
    - Disabled "automatically search for network folders and printers"
    - Disabled "remember each folder's view settings"
    - Set HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer NoRecentDocsNetHood = 1
    - Tried with mapped drives and with UNC shortcuts
    - Ran CHKDSK
    - Removed the Read-Only attribute from all folders (well, tried to remove it; it always came back with a half check)
    - Added the server's static IP to the hosts file on the clients

    I've tried monitoring the server's performance to see if anything makes sense. Occasionally the issue coincides with a spike in pages/sec (memory), but not always. Other than that, everything seems normal. The anti-virus would seem to be the most likely cause to me, considering the batch files and whatnot, but it still hangs when it is completely disabled. I'm at a loss, and if anyone can help me with this I'd greatly appreciate it!

    Read the article

  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an RPM using the rpmbuild tool. I have source code whose build produces around 30 GB of binaries. The software I am packaging has dozens of executables. When I copy only the binaries of a single executable (e.g. init), my RPM builds successfully. But when I dump the entire build into the RPM, rpmbuild does everything but gives a seg fault at the end. Here is my spec file:

        # This is a sample spec file for wget
        %define _topdir /root/mywget
        %define name source
        %define release 1
        %define version 1.12
        %define _builddir /root/mywget/BUILD/glenlivet
        %define _buildrootdir /root/mywget/BUILDROOT
        %define _buildroot /root/mywget/BUILDROOT
        %define _sourcedir /root/mywget/SOURCES

        BuildRoot: %{_buildroot}
        Summary: GNU source
        License: GPL
        Name: %{name}
        Version: %{version}
        Release: %{release}
        Source: %{name}-%{version}.tar.gz
        Prefix: /usr
        Group: Development/Tools

        %description
        The GNU sample program downloads files from the Internet using the command-line.

        %prep
        %setup -q -n glenlivet

        %build
        cd %{_builddir}
        make all

        %install
        rm -rf %{_buildrootdir}
        mkdir -p %{_buildrootdir}/bin
        cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/

        %files
        %defattr(-,root,root)
        /bin/*

    If I only copy some of the binaries (let's say one utility and its dependent binaries), it works fine. But when I try to copy the entire build, I get a seg fault. The seg fault comes after rpmbuild has executed the %prep, %build and %install sections. rpmbuild also processes my source file:

        Processing files: source-1.12-1
        Finding Provides:
        Finding Requires:
        Finding Supplements:
        Provides: ......
        Requires: ......
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Segmentation fault

    Any clue what is going wrong, or where rpmbuild fails? Thanks in advance

    Read the article

  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want the machines on my local network to be able to access it without modifying /etc/hosts (or the equivalent for Windows/OS X). My router has:

        Primary DNS server:   192.168.1.5
        Secondary DNS server: 8.8.8.8 (Google's public DNS)

    Nginx is set up to serve websites externally as *.example.com. Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. I've stripped out the lines which I'm confused about. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093. They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest what the correct lines are to give my above behaviour (namely *.example.local pointing to 192.168.1.5)?

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.5
            netmask 255.255.255.0
            broadcast 192.168.1.255
            gateway 192.168.1.254

    /etc/hostname:

        avalon

    /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

    /etc/bind/named.conf.options:

        options {
            directory "/var/cache/bind";

            forwarders {
                8.8.8.8;
                8.8.4.4;
            };

            dnssec-validation auto;

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    /etc/bind/named.conf.local:

        zone "example.local" {
            type master;
            file "/etc/bind/zones/db.example.local";
        };

        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/db.192";
        };

    /etc/bind/zones/db.example.local:

        $TTL    604800
        @   IN  SOA avalon.example.local. webadmin.example.local. (
                    5         ; Serial, increment each edit
                    604800    ; Refresh
                    86400     ; Retry
                    2419200   ; Expire
                    604800 )  ; Negative Cache TTL

    /etc/bind/zones/db.192:

        $TTL    604800
        @   IN  SOA avalon.example.local. webadmin.example.local. (
                    4         ; Serial, increment each edit
                    604800    ; Refresh
                    86400     ; Retry
                    2419200   ; Expire
                    604800 )  ; Negative Cache TTL
        ;

    What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local and be served by my webserver?

    EDIT: I made several changes to the above files on the webserver.

    /etc/network/interfaces (end of file):

        dns-nameservers 127.0.0.1
        dns-search example.local

    /etc/bind/zones/db.example.local (end of file):

        @       IN  NS      avalon.example.local.
        @       IN  A       192.168.1.5
        avalon  IN  A       192.168.1.5
        webapp  IN  A       192.168.1.5
        www     IN  CNAME   192.168.1.5

    /etc/bind/zones/db.192 (end of file):

            IN  NS   avalon.example.local.
        73  IN  PTR  avalon.example.local.

    As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for a Ubuntu 13.10 machine I had to make the following changes as well (not on the webserver, but on a separate machine):

    /etc/nsswitch.conf before:

        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

    after:

        hosts: files dns

    /etc/NetworkManager/NetworkManager.conf before:

        dns=dnsmasq

    after:

        #dns=dnsmasq

    The issue remains that it's not wildcard DNS, and so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...
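
    On the remaining wildcard issue: BIND supports wildcard records, so a single extra line in the zone (a sketch; remember to bump the serial) should make every otherwise-unlisted name under example.local resolve to the server:

        ; at the end of /etc/bind/zones/db.example.local
        *       IN      A       192.168.1.5

    Then reload BIND (e.g. with sudo rndc reload) and any name such as webapp1.example.local should answer with 192.168.1.5 without its own entry.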

    Read the article

  • nginx public webdav server

    - by Gert Cuykens
    Can you check the user group from a $remote_user?

        location ~ ^/home/(.*)$ {
            alias /home/$remote_user/$1;
            auth_pam "Restricted";
            auth_pam_service_name "nginx";
            dav_methods PUT DELETE MKCOL COPY MOVE;
            dav_access group:rw all:r;
            create_full_put_path on;
        }

        location ~ ^/get/(.*)$ {
            alias /home/$1;
            # check the group of the $remote_user;
        }

        curl -T test.txt 'http://gert:[email protected]/home/'
        curl 'http://friend:[email protected]/get/gert/test.txt'
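
    nginx itself has no group test for $remote_user, but since auth_pam is already in play and auth_pam_service_name can differ per location, the group check can be pushed into PAM. A sketch of a dedicated stack using pam_succeed_if (the service name nginx_get and group name friends are hypothetical):

        # /etc/pam.d/nginx_get -- require membership in group "friends"
        auth    required    pam_succeed_if.so user ingroup friends
        auth    required    pam_unix.so
        account required    pam_unix.so

    The second location would then add auth_pam "Restricted"; and auth_pam_service_name "nginx_get"; so only users in that group can fetch from /get/.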

    Read the article

  • How Do I Automatically Update My Database Nightly

    - by Russ
    Currently, every day before I start work, I complete the following procedure:

    1. ssh to the production server
    2. gzip our daily database dump file
    3. scp the gzipped dump file over to my computer
    4. gunzip the dump file
    5. dropdb mydatabase
    6. createdb mydatabase
    7. psql mydatabase < dump.sql

    Is it possible (I'm sure it is) to automate this process on Mac OS X, so it is done by the time I get to work in the morning? If so, what is the quickest and easiest way?
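
    A sketch of the same procedure as a script (paths, hostnames, and the remote dump location are all hypothetical), which can be run unattended, assuming passwordless ssh keys and pg_hba/.pgpass are already set up:

        #!/bin/bash
        set -e
        # Compress on the server, stream over ssh, decompress locally.
        ssh user@production 'gzip -c /path/to/daily/dump.sql' | gunzip > /tmp/dump.sql
        dropdb mydatabase
        createdb mydatabase
        psql mydatabase < /tmp/dump.sql

    Scheduled with cron, a line like 0 6 * * * /Users/russ/bin/refresh_db.sh runs it at 6 a.m.; a launchd plist with StartCalendarInterval is the more Mac-native equivalent.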

    Read the article
