Search Results

Search found 69034 results on 2762 pages for 'file locking'.

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site for which I would like to configure its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:
    1) Configured a new website with its own AppPool.
    2) Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP).
    3) Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32\inetsrv\config:
        <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe" arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com" maxInstances="4" idleTimeout="300" activityTimeout="30" requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe" queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10">
            <environmentVariables>
                <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
            </environmentVariables>
        </application>
    4) Created a php.ini file in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website) containing:
        register_globals = on
    5) Ran test.php, which simply outputs everything the call to phpinfo() returns.
    At this point, I observe that the global setting is register_globals = off (as it should be), but the local setting is also register_globals = off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:
        Configuration File (php.ini) Path           C:\Windows
        Loaded Configuration File                   C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files     (none)
        Additional .ini files parsed                (none)
    What am I messing up, or is there a different way to go about this?
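
    One way to check which php.ini the FastCGI binary actually loads is to run php-cgi by hand with the same PHPRC value; a diagnostic sketch, with paths taken from the question:

        set PHPRC=C:\inetpub\wwwroot\kickasswebsite.com
        "C:\Program Files (x86)\PHP\v5.3\php-cgi.exe" -i | findstr /C:"Loaded Configuration File"

    If that still reports the global php.ini, the PHPRC value never reaches PHP; if it reports the site php.ini, the mismatch is on the IIS side, e.g. an iisreset being needed before the applicationHost.config edit takes effect, or the handler mapping's scriptProcessor not matching the fullPath|arguments pair of the <application> entry exactly.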

  • .htaccess does not ask for the password

    - by Sarp Kaya
    I am using Ubuntu 12.04 and trying to use .htaccess on a page served by apache2. My .htaccess file looks like this:
        AuthType Basic
        AuthName "Password Required"
        AuthBasicProvider file
        AuthUserFile /home/janeb/.htpasswd
        require valid-user
    The /home/janeb/.htpasswd file is:
        inb351:$apr1$Ya4tzYvr$KCs3eyK1O2/c7Q9zRcwn..
    and the /etc/apache2/sites-available/default file is:
        UserDir public_html
        <Directory ~ "public_html/.*">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
    I restarted Apache. I have tried changing "require valid-user" to "require user inb351". Still no luck. I also tried AllowOverride with AuthConfig and with "AuthConfig Indexes". I don't know what else to do, and yes, I restarted Apache after every step I tried.
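
    One likely culprit, offered as a guess rather than a confirmed diagnosis: Apache only honors AllowOverride inside <Directory> sections given without a regular expression, so the <Directory ~ "public_html/.*"> block above may never enable .htaccess processing at all. A minimal sketch of the non-regex form:

        UserDir public_html
        <Directory /home/*/public_html>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride AuthConfig
            Order allow,deny
            allow from all
        </Directory>

    After changing this, a plain restart plus one request for the protected page should be enough to confirm whether the browser finally prompts.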

  • Downloaded CHM file is blocked, is there a solution?

    - by David Rutten
    CHM files that are downloaded are often tagged as potentially malicious by Windows, which effectively blocks all the HTML pages inside them. There's an easy fix (just unblock the file after you download it), but I was wondering if there's a better way to provide unblocked CHM files. What if I were to download the CHM file (as a byte stream) from our server inside the application, then write all the data to a file on disk. Would it still be blocked? Is there another/better way still? Edit: Yes, downloading the file using a System.Net.WebClient does solve the problem. But is there still a better way?
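
    For context (a hedged note, not from the original thread): the block is a Zone.Identifier NTFS alternate data stream that Windows attaches to files saved by browsers; writing the bytes yourself never creates that stream, which is presumably why the WebClient approach works. The stream can also be stripped after the fact with the Sysinternals streams tool, e.g.:

        streams.exe -d help.chm

    where help.chm stands in for the downloaded file; -d deletes any alternate data streams attached to it.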

  • How can I stop my web browser from redirecting to localhost when using WAMP on Windows 7?

    - by Josh
    I'm currently using Windows 7 with WAMP to try to work on some software, but my web browsers will not accept cookies from the "localhost" domain. I tried creating a few bogus domains in my hosts file by pointing them to 127.0.0.1, but when I type them in I am automatically redirected back to localhost. I have also configured virtual hosts in Apache to correspond with the domains I added to the hosts file, and it still redirects back to localhost. Is there anything special I must do on Windows 7 to get around this localhost redirect? I'll include my hosts file here:
        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host
        # localhost name resolution is handled within DNS itself.
        #    127.0.0.1       localhost
        #    ::1             localhost
        127.0.0.1    magento.localhost.com www.localhost.com
    Thanks for looking :)
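
    A minimal name-based virtual host sketch for comparison (assuming WAMP's Apache 2.2 and a placeholder DocumentRoot): the key detail is that ServerName must match the host typed into the browser exactly, otherwise Apache falls back to the first defined vhost, which in a stock WAMP setup is localhost and can look like a redirect:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName magento.localhost.com
            DocumentRoot "C:/wamp/www/magento"
        </VirtualHost>

    It may also be worth testing with a name that is not a subdomain of a real registered domain (localhost.com exists), to rule out the browser or a proxy resolving it externally.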

  • How do I remotely enable the firewall on Server 2008 to exclude specific IP addresses?

    - by Guy
    Previously I was working with Server 2003 and managed to lock myself out of the server (I was accessing it remotely) by enabling the firewall. I want to remotely enable the firewall on Server 2008 without locking myself out of the server (access via RDP), and then selectively add IP addresses to the firewall to exclude, i.e. block specific IP addresses. Are there any step-by-step instructions on how to do this safely?
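
    A hedged sketch using netsh (Server 2008's built-in advfirewall context; the blocked address is a placeholder): create the RDP allow rule first, then enable the firewall, then add block rules, so the RDP session is never cut off mid-change:

        rem 1. Make sure RDP stays reachable before anything else
        netsh advfirewall firewall add rule name="Allow RDP" dir=in action=allow protocol=TCP localport=3389
        rem 2. Turn the firewall on for all profiles
        netsh advfirewall set allprofiles state on
        rem 3. Block a specific address (placeholder IP)
        netsh advfirewall firewall add rule name="Block 203.0.113.7" dir=in action=block remoteip=203.0.113.7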

  • Scanning PHP uploads in the tmp directory with clamdscan fails

    - by Nikola
    I can't seem to get this to work; maybe it's a permission problem, but I can't even run clamdscan normally from the console as root: the result is always "Permission denied". For example, I create a file test.txt (an EICAR test file) in /tmp and execute "clamdscan /tmp/test.txt" in a console logged in as root, and I get "/tmp/test.txt: Access denied. ERROR". The clamd daemon is running as user clamav; could that be the reason? Now I want to scan the same file (/tmp/test.txt) via PHP, so I run (having chowned the file to apache:apache):
        $cmd = "clamdscan /tmp/test.txt";
        exec($cmd, $a, $b);
    and I get error 127. If I try the full path of the command, /usr/bin/clamdscan, I get error 126 (command found but not executable). Does this mean Apache doesn't have permission to execute /usr/bin/clamdscan? What could be the problem?
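
    A hedged guess at the first error: clamdscan only hands the path to the clamd daemon, so it is the clamav user, not the caller, who must be able to open the file; root privileges on the client side don't help, and on RHEL-family systems SELinux can additionally deny clamd access to /tmp (check /var/log/audit/audit.log). Two sketches that commonly get around this:

        # let the clamav user read the file, then scan by path
        chmod 644 /tmp/test.txt
        clamdscan /tmp/test.txt

        # or pass an open file descriptor to the daemon instead of the path
        clamdscan --fdpass /tmp/test.txt

    As for the PHP side: a shell exit status of 127 usually means the command was not found on Apache's PATH, and 126 that it was found but the apache user may not traverse or execute it, so checking the execute bits along /usr/bin/clamdscan's path (and any SELinux denials) would be the next step.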

  • Remove trailing slash using redirect directive in vhost

    - by Choy
    I have an issue where URLs that end in a "/" after a file name cause CSS/JS to break, i.e.:
        http://www.mysite.com/index.php/  <-- breaks
        http://www.mysite.com/            <-- OK; it only breaks for file names
    To fix this, I tried adding a Redirect 301 directive in the vhost file, checking whether there's an extension with a slash after it:
        <VirtualHost *:80>
            ServerName mysite.com
            Redirect 301 ^(.*?\..+)/$ http://mysite.com/$1
        </VirtualHost>
    The redirect appears to do nothing. Is this an issue with my implementation, or is what I'm trying to accomplish not possible with a Redirect 301 in the vhost file?
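
    Redirect in mod_alias matches a literal URL-path prefix and does not understand regular expressions, which would explain why the pattern above never fires; RedirectMatch is the regex-aware variant. A sketch under that assumption:

        <VirtualHost *:80>
            ServerName mysite.com
            RedirectMatch 301 ^(/.*\.[^/]+)/$ http://mysite.com$1
        </VirtualHost>

    The pattern captures the path up to the last dot-extension segment ("/index.php" from "/index.php/") and drops the trailing slash; note there is no slash before $1 in the target because the capture already begins with one.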

  • netmask: command not found

    - by Ian R.
    I purchased a new server with a few IPs, so I recently modified the /etc/network/interfaces file so that my IPs can go live. While editing that file I created a backup and deleted the original file. I recreated the interfaces file using the touch command and gave it +x permissions, but now, when trying to restart the interface (/etc/network/interfaces restart), I get all sorts of errors:
        /etc/network/interfaces: line 10: iface: command not found
        /etc/network/interfaces: line 11: address: command not found
        /etc/network/interfaces: line 12: netmask: command not found
        /etc/network/interfaces: line 13: auto: command not found
    Can anyone point out what I forgot to do? Thanks.
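
    The errors suggest the file is being executed rather than read: /etc/network/interfaces is a configuration file, not a script, so running it hands each keyword (iface, address, netmask, auto) to the shell as if it were a command. A sketch of the usual sequence instead:

        chmod -x /etc/network/interfaces      # the file should not be executable
        /etc/init.d/networking restart        # or: ifdown eth0 && ifup eth0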

  • PowerShell if statement not working - help!

    - by Dmart
    I have a batch file that takes a computer name as its argument and will install the Java JRE on it remotely. Now I'm trying to run this PowerShell script to repeatedly call the batch file and install Java on any system that it finds without the latest version. It seems to run error-free, but the statements inside the if block never seem to run, even when the if conditional evaluates to true. Can anyone look at this script and point out what I'm possibly missing? I'm using the Quest AD cmdlets and the BSOnPosh module. Thank you.
        get-qadcomputer -sizelimit 0 -name mypc* -searchroot 'OU=MyComputers,DC=MyDomain,DC=lcl' |
        test-host -property name |
        ForEach-Object -process {
            $targnm = $_.name
            $tststr = reg query "\\$targnm\HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v Java6FamilyVersion
            if (-not ($tststr | select-string -SimpleMatch '1.6.0_20')) {
                $mssg = "Updating to JRE 6u20 on $targnm"
                Out-Host $mssg
                Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
                cmd /c \\server\apps\java\installjreremote.cmd $targnm
            }
            else {
                $mssg = "JRE 6u20 found on $targnm"
                Out-Host $mssg
                Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
            }
        }
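
    Two hedged observations, not certain diagnoses: first, Out-Host does not take its input as a positional argument, so "Out-Host $mssg" would normally raise a binding error rather than print, which makes that branch worth checking; second, reg query writes its failures to stderr, so $tststr can be empty in ways that are easy to miss. A sketch of the loop body using plainer output cmdlets and an anchored match:

        $tststr = reg query "\\$targnm\HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v Java6FamilyVersion 2>$null
        if ($tststr -match '1\.6\.0_20') {
            $mssg = "JRE 6u20 found on $targnm"
        }
        else {
            $mssg = "Updating to JRE 6u20 on $targnm"
            cmd /c \\server\apps\java\installjreremote.cmd $targnm
        }
        Write-Host $mssg
        Add-Content -Path c:\install_jre_log.txt -Value $mssg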

  • apt-get doesn't download from NFS location

    - by pravesh
    I switched to Linux three months ago and am trying to understand the install process, and apt-get in particular. I am able to successfully install and download packages when I configure my repository with an http location in the /etc/apt/sources.list file, e.g.:
        deb http://web.myspqce.com/u/eng/rose/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free
    With this, apt-get install both downloads the package (to /var/cache/apt/archives) and installs it. When I change the source location to file: instead of http: (an NFS mount point), the package gets installed but NOT downloaded into /var/cache/apt/archives:
        deb file:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free
    Please let me know if there is any configuration or setting that makes apt-get both download and install the package when I use (NFS) file:/ instead of http:/ in sources.list. To achieve this, I can use apt-get --download-only and then apt-get install in two separate calls, but I want to know why the package is not downloaded with apt-get install when file:/ is used in sources.list.
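
    A hedged explanation: with file: URIs apt can read the .deb files in place, so it deliberately skips copying them into its cache. The copy: URI scheme behaves like file: but does populate /var/cache/apt/archives; a one-line sketch:

        deb copy:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free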

  • Is there a command to change primary group for a new user in Cygwin?

    - by Rob Gilliam
    Is there a way to set a (new) user's primary group in Cygwin's /etc/passwd file without hand-editing the file? I have a local group set up for members of the Dev Team on a Windows Server 2008 R2 box so that we can all modify a particular group's files, but the regular users can only read them. As some of the work we do uses scripts that rely on Cygwin tools, this group is also in the /etc/group file. When I need to add a new user to the "Dev Team" group, I add them in Server Manager and then use mkpasswd to add that user to Cygwin's /etc/passwd file. Unfortunately, they get the regular Domain Users group assigned as their primary group, and I then have to go in and edit the passwd file to change the group. I now need to write some instructions for someone else who is not au fait with UNIX/Linux/Cygwin so that they can set up new Dev Team users, and obviously hand-editing /etc/passwd is a recipe for disaster if you don't know what you're doing. So, is there a way of getting mkpasswd to set a different primary group, or another tool like Linux's usermod which can be used to change the group in a more controlled manner?
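
    I don't know of a mkpasswd flag that sets the primary group directly, so here is a sketch that rewrites the GID field (field 4) of the generated entry before appending it; the GID 11001 is a placeholder for whatever number the Dev Team group has in /etc/group:

        # look up the real GID first with:  grep '^Dev Team:' /etc/group
        mkpasswd -l -u newuser | awk 'BEGIN { FS = OFS = ":" } { $4 = 11001; print }' >> /etc/passwd

    Wrapped in a small script that takes the username as $1, this would give the non-Cygwin colleague a single command to run instead of an edit.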

  • Setting Up Local Repository with TortoiseSVN in Windows

    - by Teno
    I'm trying to set up a local repository so that all commits are copied to a local destination, not a remote server. I followed this tutorial. What I did:
    1. Created a folder named "SVN_Repo" under C:\Documents and Settings\[user-name]\My Documents\
    2. Right-clicked on the folder and chose TortoiseSVN -> Create repository here
    3. Clicked OK in the popup dialog asking whether to create a directory structure.
    4. Created a folder named Repos for the local destination, under E:\
    5. Right-clicked on the SVN_Repo folder and chose SVN Checkout...
    6. Typed file:///E:\repos in the URL of repository field and clicked the OK button.
    What I got:
        Checkout from file:///E:/repos, revision HEAD, Fully recursive, Externals included
        Unable to connect to a repository at URL 'file:///E:/repos'
        Unable to open an ra_local session to URL
        Unable to open repository 'file:///E:/repos'
    I must be doing something wrong. Could somebody point it out? Thanks.
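
    A hedged reading of steps 5 and 6: the repository was created in SVN_Repo, but the checkout URL points at E:\repos, which is a plain empty folder, so ra_local rightly finds no repository there. Checkout normally goes the other way, from the repository URL into the working-copy folder; a sketch using the path from step 1:

        svn checkout "file:///C:/Documents and Settings/[user-name]/My Documents/SVN_Repo" E:\Repos

    In the TortoiseSVN dialog that means the SVN_Repo URL goes in the "URL of repository" field and E:\Repos in the "Checkout directory" field.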

  • Available filesystems for Linux that are case-insensitive?

    - by David
    I have a client whose web application was written entirely in a Windows environment and served from Windows. Unfortunately there are way too many cases of GET "file/At/Somelocation.php" where the file is actually something horrible like "File/at/SomeLocation.PHP". I really don't want to be forced to work in Windows, but it will take weeks if not longer to fix all the casing issues. Am I SOL here?
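
    If the mismatches only need to survive HTTP requests (rather than internal PHP includes), Apache's mod_speling can paper over case differences without touching the filesystem; a sketch, assuming Apache is the server:

        # mod_speling corrects one "misspelling" per path component, including case
        LoadModule speling_module modules/mod_speling.so
        <Directory /var/www/html>
            CheckSpelling On
        </Directory>

    This won't help with mismatched include() paths inside the PHP code itself, so it is a stopgap rather than a full answer, but it can buy time while the filenames are normalized.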

  • How do I force a restore over an existing database?

    - by Ian Boyd
    I have a database, and I want to force a restore over top of it. I check the option:
        Overwrite the existing database (WITH REPLACE)
    But, as expected, SSMS is unable to overwrite the existing database. Of course I don't want different filenames; I want to overwrite the existing database. How do I force a restore over an existing database? And for the Google search crawler:
        File '%s' is claimed by '%s'(4) and '%s'(3). The WITH MOVE clause can be used to relocate one or more files.
        RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3176)
    Update: The script (before I deleted the database, because I needed to get it done) was:
        RESTORE DATABASE [HealthCareGovManager]
        FILE = N'HealthCareGovManager_Data',
        FILE = N'HealthCareGovManager_Archive',
        FILE = N'HealthCareGovManager_AuditLog'
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
        MOVE N'HealthCareGovManager_Data' TO N'D:\CGI Data\HealthCareGovManager.MDF',
        MOVE N'HealthCareGovManager_Archive' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_Log' TO N'D:\CGI Data\HealthCareGovManager.LDF',
        NOUNLOAD, REPLACE, STATS = 10
    I used the UI to delete the existing database, so that I could use the UI to force an overwrite of the (non)existing database. Hopefully there can be an answer so that the next guy can have one. No, nobody was in the context of the database (the error message from other connections is quite different from this error, and I only got to see this error after I killed the other connections).
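
    A hedged observation on the script above: the Archive and AuditLog logical files are both MOVEd to the same physical file, D:\CGI Data\HealthCareGovManager.ndf, which matches the wording of error 3176 ("claimed by ... (4) and ... (3)"). Giving each logical file its own target should let WITH REPLACE proceed; the filenames below are illustrative:

        MOVE N'HealthCareGovManager_Archive'  TO N'D:\CGI Data\HealthCareGovManager_Archive.ndf',
        MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager_AuditLog.ndf',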

  • Move files from FTP server to S3

    - by lev
    I would like to set up an FTP server where users will upload files, and for each file, put it on S3 storage and delete it from the FTP server. (The server runs Ubuntu on EC2.) Here is what I have already tried, with no success; a sketch of a possible variant on the third option follows the list.
    1. Mount the S3 bucket using s3fs. I followed those instructions, but there is a bug in the latest version of s3fs that prevents it from working. The bug was fixed on the develop branch, but I don't want to use an unstable version in production.
    2. Use vsftpd and s3cmd sync via cron to sync the files periodically. The problem with that approach is that s3cmd can start running in the middle of a file upload and start syncing the incomplete file. Also, s3cmd doesn't give any feedback if the sync fails, so I have no way of knowing whether I can delete the files after the sync command finishes running.
    3. Use pure-ftpd's upload script feature (which allows running a script after a file finishes uploading), but I noticed that if the upload failed in the middle, the script runs anyway, and I have no way of knowing whether the upload was successful or not.
    I've been at it for a few days now, and I'm at a loss. Any suggestions will be welcomed.
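
    Riffing on option 3 (a sketch, not a tested recipe): pure-ftpd's upload script receives the uploaded file's path as its first argument, so the script itself can gate the delete on the S3 copy succeeding; s3cmd put is assumed to return a non-zero exit status on failure, and the bucket name is a placeholder. This addresses the "did the S3 copy work" half of the problem, though not aborted-upload detection by itself:

        #!/bin/sh
        # pure-uploadscript handler: $1 is the path of the uploaded file
        FILE="$1"
        if s3cmd put "$FILE" "s3://my-bucket/$(basename "$FILE")"; then
            rm -f -- "$FILE"                                   # delete only after the S3 copy succeeded
        else
            logger -t ftp2s3 "upload to S3 failed for $FILE"   # keep the file for a later retry
        fi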

  • Web Hosting: Any web host that supports more than 50,000 files?

    - by Devner
    Hi all, For my PHP & MySQL based application, I am trying to buy website hosting from a host that does not limit the number of files I carry in my hosting account. Almost all the hosts have a common limit of 50,000 files (some call it 50,000 nodes); the rest (to the extent of my search) are not even close. I have gone through the various websites, Googled a lot of information, and spoken with the customer service of the hosting companies, and they said that they have a limit of 50,000 files and that's why they call it the LIMIT. Now, my application is a kind of social networking website where people can upload various files of varying size. So if 50,000 users were to join the website and upload 1 file each, the limit of 50,000 would be reached very easily, and my 50,001st customer would start facing file upload problems (and so would my account). So I would like to know if there are any website hosting services that do NOT levy such restrictions. In summary, I need the following:
    - No maximum file limit (more than 50,000 files in the account).
    - No maximum file upload limit in the server settings (10MB, 12MB, 15MB, 20MB, etc.).
    - Ability to upload files of various types (zip, flv, jpg, png, etc.).
    - Ability to stream audio and video (live audio & video not necessary).
    - Access to .htaccess
    - Access to php.ini, my.cnf or my.ini (this would be a plus)
    - Supports SSL.
    - Provides dedicated hosting (& IP) as well.
    - Monthly payments without contracts are a plus.
    If you know of any such website hosting services, please post a reply (a link to the same will be appreciated). Thank you.

  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an RPM using the rpmbuild tool. I have source code whose build produces around 30 GB of binaries, and the software has dozens of executables. When I copy only the binaries of a single executable (e.g. init), my RPM builds successfully. But when I dump the entire build into the RPM, rpmbuild does everything but gives a seg fault at the end. Here is my spec file:
        # This is a sample spec file for wget
        %define _topdir /root/mywget
        %define name source
        %define release 1
        %define version 1.12
        %define _builddir /root/mywget/BUILD/glenlivet
        %define _buildrootdir /root/mywget/BUILDROOT
        %define _buildroot /root/mywget/BUILDROOT
        %define _sourcedir /root/mywget/SOURCES

        BuildRoot: %{_buildroot}
        Summary: GNU source
        License: GPL
        Name: %{name}
        Version: %{version}
        Release: %{release}
        Source: %{name}-%{version}.tar.gz
        Prefix: /usr
        Group: Development/Tools

        %description
        The GNU sample program downloads files from the Internet using the command-line.

        %prep
        %setup -q -n glenlivet

        %build
        cd %{_builddir}
        make all

        %install
        rm -rf %{_buildrootdir}
        mkdir -p %{_buildrootdir}/bin
        cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/

        %files
        %defattr(-,root,root)
        /bin/*
    If I copy only some of the binaries (say, one utility and its dependent binaries) it works fine, but when I try to copy the entire build, I get a seg fault. The seg fault comes after rpmbuild has executed the %prep, %build and %install sections, and after it has processed my files:
        Processing files: source-1.12-1
        Finding Provides:
        Finding Requires:
        Finding Supplements:
        Provides:......
        Requires:......
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Segmentation fault
    Any clue what is going wrong, or where rpmbuild fails? Thanks in advance.
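
    A hedged suspicion rather than a diagnosis: 30 GB is beyond what many rpm versions can package gracefully; the cpio payload format caps individual file sizes (around 4 GB in the common newc format), and older rpm releases had trouble with totals well below 30 GB, so the crash may simply be a size limit being hit without a clean error. Two quick checks before digging deeper:

        du -sh /root/mywget/BUILDROOT             # total payload rpmbuild must archive
        find /root/mywget/BUILDROOT -size +4G     # any single file near the format limit

    If size is the issue, splitting the build into several subpackages (or shipping the bulk as a tarball installed by %post) may be the practical way out.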

  • DNS server BIND is not working [closed]

    - by user1742080
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:
        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { any; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        zone "." IN {
            type hint;
            file "named.ca";
        };
        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";
        zone "mydomain.com" {
            type master;
            file "/var/named/data/named.mydomain.com";
            allow-update { none; };
        };
    AND the content of /var/named/data/named.mydomain.com:
        $TTL 38400
        mydomain.com.     IN SOA  ns1.mydomain.com. milad.yahoo.com. (
                                  2012101201 ; serial number YYMMDDNN
                                  28800      ; Refresh
                                  7200       ; Retry
                                  864000     ; Expire
                                  38400 )    ; Min TTL
        mydomain.com.     IN A    1.2.3.4
        www               IN A    1.2.3.4
        ns1.mydomain.com. IN A    1.2.3.4
        ns2.mydomain.com. IN A    1.2.3.4
        mydomain.com.     IN NS   ns1.mydomain.com.
        mydomain.com.     IN NS   ns2.mydomain.com.
    AND I'm sure the named service is running:
        [root@server ~]# service named status
        version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
        CPUs found: 8
        worker threads: 8
        number of zones: 20
        debug level: 0
        xfers running: 0
        xfers deferred: 0
        soa queries in progress: 0
        query logging is OFF
        recursive clients: 0/0/1000
        tcp clients: 0/100
        server is up and running
        named (pid 26299) is running...
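
    A few checks that usually narrow this kind of problem down (run on the server; mydomain.com stands in for the real domain): named-checkconf and named-checkzone validate the syntax, and dig asks this BIND instance directly, bypassing whatever resolver ping happens to use:

        named-checkconf /etc/named.conf
        named-checkzone mydomain.com /var/named/data/named.mydomain.com
        dig @127.0.0.1 mydomain.com A        # ask this BIND instance directly

    If all three pass, the remaining suspects are outside named itself: port 53 (TCP and UDP) being open in iptables, and the registrar actually delegating the domain to ns1/ns2.mydomain.com.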

  • Rar.exe CLI - add files from directory without directory structure?

    - by Vercas
    I need to grab every file in a folder and shove it into a RAR archive. This is my current method:
        "C:\Program Files\WinRAR\Rar.exe" a -r -md2m -s -m5 -ma4 -t ..\Releases\vCommands.rar bin\
    ... where bin is my folder. I tried the equivalent for another archiver too, and the results are the same. To be clear, here's a picture: to the top-left, in the .rar file, there is the bin directory which contains all the files; to the bottom-right, in the .7z file, all those files are in the archive root. What I need is to shove all those files into the .rar archive root, instead of a folder, without having to execute my batch file inside that bin folder.
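
    Rar.exe has switches for exactly this: -ep drops all paths from the stored names, and -ep1 drops only the base directory. A sketch keeping the original switches and adding -ep1:

        "C:\Program Files\WinRAR\Rar.exe" a -r -ep1 -md2m -s -m5 -ma4 -t ..\Releases\vCommands.rar bin\*

    With -ep1 and the bin\* mask, files are stored relative to bin, so they land in the archive root rather than inside a bin folder.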

  • Fixing a corrupted Windows Server 2003 server

    - by Keith
    I have a Windows Server 2003 server that is mainly used for some reporting done in SQL Server. Recently Windows has started complaining about being corrupted; we are getting NTFS error 55:
        The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume \Device\HarddiskVolume1.
    The server is RAID 5 and I did have a disk die, but the array never went degraded since I have a hot spare. I replaced the hot spare and I'm still having problems. When I run chkdsk I get tons of messages. Some are:
        Deleting corrupted attribute record (128, "") from file record segment 194746
    Those go on for a while. Then it deletes some orphan files. Then it does:
        Correcting error in index $I30 for file 132426
    And that goes on for a while. Then I get tons of:
        Recovering orphaned file RE1AB6~1.LOG into directory file 534959
    I have seen a lot of errors relating to SQL Server Reporting Services. What are my options at this point? I would prefer to fix the issue instead of building a new server, but I don't know if I can at this point.

  • correct format for datetime appended to filename

    - by jhayes
    I'm trying to set up a batch file to execute a set of stored procs and dump the output to a timestamped text file. I'm having problems finding the correct format for the timestamp. Here is what I'm using:
        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%date:~-0,10%_%time:~-0,10%.txt"
    The error I get is:
        Cannot open output file - x:\filename_Thu 06/25/_16:26:43.1.txt
        No such file or directory
    I can't find the documentation, and I've played around with it but can't find the correct format.
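
    The expanded filename in the error shows the real problem: %date% and %time% contain slashes, spaces and colons, none of which are legal in Windows filenames. A sketch that slices out only the digits (the substring offsets assume the "Thu 06/25/2009" date layout from the error message, so they may need adjusting for other locales; the second line pads early-morning times whose hour starts with a space):

        set ts=%date:~10,4%%date:~4,2%%date:~7,2%_%time:~0,2%%time:~3,2%%time:~6,2%
        set ts=%ts: =0%
        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%ts%.txt"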

  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want the machines on my local network to be able to access it without modifying /etc/hosts (or the equivalent for Windows/OSX). My router has:
        Primary DNS server   192.168.1.5
        Secondary DNS server 8.8.8.8 (Google's public DNS)
    Nginx is set up to serve websites externally as *.example.com. Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093 . They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest what the correct lines are to give the above behaviour (namely *.example.local pointing to 192.168.1.5)?
    /etc/network/interfaces:
        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet static
            address 192.168.1.5
            netmask 255.255.255.0
            broadcast 192.168.1.255
            gateway 192.168.1.254
    /etc/hostname:
        avalon
    /etc/resolv.conf:
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
    /etc/bind/named.conf.options:
        options {
            directory "/var/cache/bind";
            forwarders { 8.8.8.8; 8.8.4.4; };
            dnssec-validation auto;
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };
    /etc/bind/named.conf.local:
        zone "example.local" {
            type master;
            file "/etc/bind/zones/db.example.local";
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/db.192";
        };
    /etc/bind/zones/db.example.local:
        $TTL 604800
        @ IN SOA avalon.example.local. webadmin.example.local. (
                      5 ; Serial, increment each edit
                 604800 ; Refresh
                  86400 ; Retry
                2419200 ; Expire
                 604800 ) ; Negative Cache TTL
    /etc/bind/zones/db.192:
        $TTL 604800
        @ IN SOA avalon.example.local. webadmin.example.local. (
                      4 ; Serial, increment each edit
                 604800 ; Refresh
                  86400 ; Retry
                2419200 ; Expire
                 604800 ) ; Negative Cache TTL
    What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local and be served by my webserver?
    EDIT: I made several changes to the above files on the webserver.
    /etc/network/interfaces (end of file):
        dns-nameservers 127.0.0.1
        dns-search example.local
    /etc/bind/zones/db.example.local (end of file):
        @      IN NS    avalon.example.local.
        @      IN A     192.168.1.5
        avalon IN A     192.168.1.5
        webapp IN A     192.168.1.5
        www    IN CNAME 192.168.1.5
    /etc/bind/zones/db.192 (end of file):
           IN NS  avalon.example.local.
        73 IN PTR avalon.example.local.
    As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for a Ubuntu 13.10 machine I had to make the following changes as well (not on the webserver, but on the separate machine):
    /etc/nsswitch.conf, before:
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
    after:
        hosts: files dns
    /etc/NetworkManager/NetworkManager.conf, before:
        dns=dnsmasq
    after:
        #dns=dnsmasq
    The issue remains that it's not wildcard DNS, so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...
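
    For the remaining wildcard question, BIND supports wildcard records directly; a sketch of the one extra line in db.example.local:

        *      IN A     192.168.1.5

    After adding it, bump the serial and reload (e.g. sudo rndc reload). One hedged side note on the records already added: a CNAME must point at a name rather than an IP address, so the "www IN CNAME 192.168.1.5" line would normally be either "www IN A 192.168.1.5" or "www IN CNAME avalon.example.local.", though the wildcard A record makes it redundant anyway.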

  • Linux: Can't overwrite files on Samba store

    - by jonescb
    I'm using CentOS 5.5 with smbclient 3.0.33-3.28-el5 (the latest version in the repo), and I can't overwrite files in my Samba store. I am not the admin of the Samba server, so there isn't anything I can do server side, but I do have write permission to the server. I know the server runs Windows XP or Server 2003; I don't know which. I can delete the file and then copy the new version over, but I can't overwrite it. Using the cp command I get this error:
        [jonescb@localhost ~]$ cp foo.txt /mnt/si_storage/foo.txt
        cp: cannot create regular file `/mnt/si_storage/foo.txt': No such file or directory
    And if I edit a file on the server using vim, I can save it once, but if I save it again I get this:
        "/mnt/si_storage/foo.txt" E212: Can't open file for writing
    This is my /etc/fstab entry for the Samba server:
        //192.168.1.2/SI_STORAGE /mnt/si_storage cifs username=myuser,password=mypass 0 0
    Edit: I can overwrite files just fine on my XP machine. The CentOS box is the only one having problems.
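
    Purely as an experiment (these are guesses, not a known fix): this "create works, overwrite fails" pattern on CIFS mounts against Windows servers is sometimes attributed to stale inode/attribute caching in the old cifs kernel module, and the noserverino and nounix mount options are the usual first knobs to try:

        //192.168.1.2/SI_STORAGE /mnt/si_storage cifs username=myuser,password=mypass,noserverino,nounix 0 0

    Remount (umount then mount -a) and retest the cp after each option change; if neither helps, a newer kernel/cifs module is the next suspect.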

  • Windows XP: How to delete files that cannot be deleted?

    - by glenneroo
    I have a backup copy of my previous Documents and Settings folder which only contains my original user and, within that, two directories (Favorites and Local Settings) which are visible in a cmd shell, but when I try to delete them, Windows gives me this error: [error dialog not preserved] If I try to delete the Documents and Settings folder, I receive this warning: [warning dialog not preserved] I tried doing this in a cmd shell:
        attrib *.* -r -a -s -h /s
    But it did not help, nor did it return any errors/warnings. Unlocker 1.8.5 returns: "No Locking handle found." Any ideas?
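
    A hedged sequence that often clears undeletable NTFS leftovers on XP, assuming the problem is ownership/ACLs inherited from the old installation rather than open handles (cacls ships with XP; run from an administrator account, and the path below is a placeholder for wherever the backup copy lives):

        cacls "C:\backup\Documents and Settings" /t /e /g Administrators:F
        rd /s /q "C:\backup\Documents and Settings"

    If cacls reports access denied, taking ownership first via the folder's Properties > Security > Advanced > Owner tab (with "Replace owner on subcontainers" checked) and then re-running the two commands is the usual fallback.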
