Search Results

Search found 10494 results on 420 pages for 'beyond the documentation'.


  • Samba and Windows 7

    - by John Gaughan
    I built a new computer with the intention of it being primarily a home file server. Here is my setup:

      - one desktop with Windows 7 64 HP
      - one laptop with Windows 7 64 HP
      - one desktop with Kubuntu 11.10 (server)

    The two desktops use static IPs, and I have hostnames mapped in the HOSTS files on all three systems. I have the same username/password combo on all three systems. I have been trying for a while now to set up Samba so the Windows 7 systems can see and use the server. Even if I can get the server to show up, Windows is unable to log in. One of the first things I did was enable LMv2 authentication, which this version of Samba (3.5.11) supports. The workgroup is set correctly. I can normally see the server, but cannot authenticate. Windows homegroup is turned off. Pinging between machines works fine, and the two Windows 7 systems work together flawlessly.

    What I am trying to do is set up Samba for peer-to-peer networking using NTLM security and user-mode authentication. According to the documentation this is possible, but there are no examples that I could find. In all the googling I have done, I see a lot of people asking how to set this up, but it either works for someone else and not for me (no idea what I'm missing), or it doesn't work. Has anyone gotten this to work? Is there a place I could download an smb.conf that is set up to work in this environment?
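    A minimal smb.conf sketch for this kind of standalone, user-level setup (the share name, path, and account name are placeholders, not taken from the question, and this is untested against Samba 3.5.11 specifically):

        [global]
            workgroup = WORKGROUP
            server string = Home file server
            security = user
            map to guest = Bad User

        [share]
            path = /srv/share
            browseable = yes
            read only = no
            valid users = john

    With security = user, the Linux account name and password must match the Windows credentials, and the Samba password must be set explicitly (smbpasswd -a john). This is only a sketch of the standalone-server approach, not a verified configuration.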

    Read the article

  • Delay NTP Initialisation, Cisco 877W, IOS 12.4(24)T1

    - by Mike Insch
    I have a Cisco 877W which I'm using for my home ADSL connection (and as a refresher in Cisco IOS). I've got a working config in place, with my PPPoA connection coming online correctly and VLANs and other settings configured as I want them, but I can't crack the NTP configuration. For NTP, I have the following defined:

        ntp server 0.uk.pool.ntp.org source Dialer0
        ntp server 1.uk.pool.ntp.org source Dialer0
        ntp server 2.uk.pool.ntp.org source Dialer0
        ntp server 3.uk.pool.ntp.org source Dialer0

    This setup works fine when issued in Global Configuration Mode while the Dialer0 interface (ATM0.1) is up. The configuration fails at startup, though:

        Translating "1.uk.pool.ntp.org"...domain server (208.67.222.222) (208.67.220.220)
        ntp server 1.uk.pool.ntp.org source Dialer0
        ^
        % Invalid input detected at "^" marker.

    This is repeated for the other servers defined. Obviously the DNS lookup for the server(s) fails because the DNS servers cannot be reached while the external interface is not yet online. Is there a way to delay the NTP configuration until after the Dialer0 interface is fully initialised? Can the NTP commands be triggered by the Line Protocol on the Dialer0 interface transitioning to the up state? Alternatively, can the NTP commands be delayed for 5 minutes after the router has finished initialising? Any advice, or pointers to useful documentation or examples, gratefully received.
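    One possible direction, sketched and untested on 12.4(24)T1: an EEM applet using a countdown timer, which fires roughly the stated number of seconds after the configuration is parsed at boot and then re-issues the ntp commands once DNS is likely reachable (the applet name and 300-second delay are assumptions):

        event manager applet DEFERRED-NTP
         event timer countdown time 300
         action 1.0 cli command "enable"
         action 1.1 cli command "configure terminal"
         action 1.2 cli command "ntp server 0.uk.pool.ntp.org source Dialer0"
         action 1.3 cli command "ntp server 1.uk.pool.ntp.org source Dialer0"
         action 1.4 cli command "ntp server 2.uk.pool.ntp.org source Dialer0"
         action 1.5 cli command "ntp server 3.uk.pool.ntp.org source Dialer0"
         action 1.6 cli command "end"

    An EEM syslog-pattern event tied to the interface coming up would be the other approach mentioned in the question, but the exact message to match depends on how the dialer/virtual-access interfaces report state, so the timer variant is the more predictable sketch.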

    Read the article

  • How to install Subversion on a 1&1 server with Windows?

    - by Miles M.
    I would like to start using Unfuddle for my project on a 1&1 server. I have never used Subversion or source control before, so I read a lot of documentation about it, but each time I get lost at the very beginning. I've downloaded the latest version of Subversion, but every tutorial describes a different way to proceed. First I saw, in a lot of tutorials, that you have to enter command lines. Is that ONLY for Linux? Like here: http://chwalisz.org/2007/08/05/subversion-on-11-shared-hosting/

    I also found something completely different on some websites. I think (correct me if I'm wrong) these are the Windows tutorials, deeply different from the Linux one. So I found these:

      - http://www.codinghorror.com/blog/2008/04/setting-up-subversion-on-windows.html
      - http://geekswithblogs.net/emanish/archive/2006/06/14/81905.aspx
      - http://better-scm.shlomifish.org/subversion/Svn-Win32-Inst-Guide.html

    And I don't understand: Do I still have to put the Subversion files on the server? Do I have to install Apache? Where, on my computer or on my server? I'm working with WampServer, so I think I already have Apache installed, right? When they say it is for Windows, do they mean it is for Windows servers or for your own OS? 'Cause my servers are on Linux. How could I install Subversion on a 1&1 Linux server from my Windows 7 computer? Thanks, that's a lot of questions, but it's really messy in my mind and I can't find anything clear.
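    One point that may simplify things: since Unfuddle hosts the repository itself, nothing necessarily has to be installed on the 1&1 server at all - only a Subversion client on the machine where the working copy lives. A hedged sketch of the basic client workflow from a Windows command prompt (the repository URL is a made-up placeholder):

        svn checkout https://youraccount.unfuddle.com/svn/youraccount_project/trunk myproject
        cd myproject
        svn add newfile.php
        svn commit -m "First commit"
        svn update

    A graphical client such as TortoiseSVN wraps the same commands. Installing Apache and mod_dav_svn is only needed when hosting the repository yourself, which is a separate scenario from the hosted Unfuddle setup described here.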

    Read the article

  • Zabbix Proxy not collecting data

    - by syntaxcollector
    Hi all. I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility. However, the link between the colo and the office is flaky. What I'm trying to do is set up a proxy on the colo side with a 1-hour cache that relays the data to our primary server at the office. Our Zabbix server is compiled from source and uses a MySQL database. I've followed the instructions in the Zabbix documentation to compile the proxy using a SQLite3 database. I added the proxy to Zabbix under Administration > DM > Proxies. The Zabbix server "sees" the proxy, because the "last seen" field is always under 60s. However, when I assign a colo host to the proxy, I stop receiving data from it. The colo host's zabbix_agentd.log file says this:

        29343:20100622:124847 Timeout while answering request
        29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds

    The zabbix_proxy.log says this:

        2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds]
        2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed

    I am also unable to receive any SNMP data, which is more important to me than the Zabbix agent data. Has anyone had this problem before?

      - Zabbix Server OS: CentOS 5.4
      - Zabbix Server Build: 1.8.2 from source
      - Zabbix Proxy OS: CentOS 5.4
      - Zabbix Proxy Build: 1.8.2 from source

    P.S. The SQLite database on the Zabbix proxy never gets any data written to it; it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes, I've checked the permissions.)
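    For reference, a minimal zabbix_proxy.conf sketch for an active proxy with roughly a one-hour offline buffer (hostnames and paths are illustrative, not taken from the question):

        Server=office-zabbix.example.com
        Hostname=colo-proxy
        DBName=/var/lib/zabbix/zabbix_proxy.db
        ConfigFrequency=300
        ProxyOfflineBuffer=1

    One thing worth double-checking in a situation like this: the Hostname value must match the proxy name entered under Administration > DM > Proxies exactly (case included), and each colo host must be set to "Monitored by proxy" in its host definition; agents on those hosts then point their Server/ServerActive settings at the proxy, not the central server. This is offered as a checklist sketch, not a confirmed diagnosis.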

    Read the article

  • Apache Logs - Not Showing Requested URL or User IP

    - by iarfhlaith
    Hey all, I'm having a problem with a server that keeps falling over. Looking through the Apache error logs, it appears to come from a rogue PHP script. I'm trying to track this down using Apache's error_log and access_log, but the server log format isn't giving me the detail I need. I suspect the log format isn't sufficient, but I've reviewed the Apache documentation and I've included the switches that I think I need. Here's my LogFormat configuration in the httpd.conf file:

        LogFormat "%h %l %u %t \"%r\" %s %b %U %q %T \"%{Referer}i\" \"%{User-Agent}i\"" extended
        CustomLog logs/access_log extended

    Using the %U %q %T switches I expected to see the requested URL, query string, and the time it took to serve the request, but I'm not seeing any of this information when I tail the log. Here's an example:

        127.0.0.1 - - [01/Jun/2010:14:12:04 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
        127.0.0.1 - - [01/Jun/2010:14:12:05 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"

    (and four more identical lines, one second apart). Have I made a mistake in configuring the LogFormat, or is it something else? Also, each request appears to come from localhost. How come it's not giving me the remote user's IP address? Thanks, Iarfhlaith
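    Worth noting: the entries shown are Apache's own "internal dummy connection" wake-up requests, which genuinely originate from 127.0.0.1 and request "OPTIONS *", so %h, %U and %q are reporting accurately for them. A common way to keep them out of the access log so real client requests stand out (a sketch, assuming mod_setenvif is loaded):

        SetEnvIf Remote_Addr "127\.0\.0\.1" loopback
        SetEnvIf User-Agent "internal dummy connection" loopback
        CustomLog logs/access_log extended env=!loopback

    Real client traffic should then appear with the remote IP in %h and the requested path and query string in %U and %q.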

    Read the article

  • IIS7 - Web Deployment Tool - SetParam/SetParamFile to set http and https bindings + Cert

    - by Andras Zoltan
    Hi, we're currently using the MS Web Deployment Tool to sync a live website and some web services from a staging box to two live servers. The staging box hosts the site on any IP on port 17000, whereas the two live servers are load-balanced and have a different IP for each of them. At present, I generate two separate packages for deployment - one for each machine - using the sync operation and specifying a DestinationBinding parameter as follows:

        msdeploy -verb:sync -source:WebServer,computerName=localhost -dest:package="machinename.zip"
            -setParam:type="DestinationBinding",scope="SiteName",value="ip_address:port:"

    (Split across multiple lines to make it easier to read!) I run this twice, with a different target filename and IP address for each of the two machines. When it comes to deployment, I simply do a sync from each package to its respective live site. I know, I know - I should be able to do it by generating one parameterised package and then perhaps using the SetParamFile switch for each of the two servers - believe me I'd like to, but the documentation on doing this is frankly non-existent.

    Now I need to configure and deploy both HTTP and HTTPS bindings for this site, including the SSL cert that is to be used. I've added an SSL binding for the site on the staging box - which uses a development cert (which will need to be replaced - or should the staging box be using the live cert?) - and now the above command line has the effect of replacing the target IP on both the http and https entries. It appears that I cannot specify multiple bindings plus the cert information in the DestinationBinding value in the -setParam above, so does anyone know how I would go about doing this? Any help greatly appreciated.
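    For the single-package route, a hedged sketch of the declareParam / setParamFile flow (parameter names, file names and IP values here are illustrative and the exact scopes would need testing; the switches themselves are standard msdeploy options):

        rem 1) build one parameterised package on the staging box
        msdeploy -verb:sync -source:WebServer,computerName=localhost -dest:package="site.zip"
            -declareParam:name="DestinationBinding",kind=DestinationBinding,scope="SiteName"

        rem 2) per-server parameter file, e.g. server1.params.xml
        <parameters>
          <setParameter name="DestinationBinding" value="10.0.0.11:80:" />
        </parameters>

        rem 3) deploy the same package to each server with its own parameter file
        msdeploy -verb:sync -source:package="site.zip" -dest:auto,computerName=Server1
            -setParamFile:server1.params.xml

    The same package is then reused for the second server with a second parameter file containing its IP.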

    Read the article

  • KVM network bridge with two NICs

    - by Eil
    Greetings. I'm trying to set up bridged networking with KVM and am getting nowhere. There are docs and tutorials on the subject, but they all seem to conflict or don't provide enough info. I was wondering if someone can give me a high-level overview of how to get this working. I can probably work out the details myself (configuring the interfaces, adding routes, etc.); I just need help on the big picture: how everything is interconnected.

    I have a RHEL5 server with KVM installed and running. It has two physical NICs, eth0 and eth1, in the same VLAN. I would like to use eth1 for all traffic between the guests and the rest of the network, and reserve eth0 for host management, guest migrations, etc. if possible. I'm not picky about which one gets the default route, although it would be nice if we could make it eth0. All of the guests will have static IPs. I would prefer that when a new guest is added, the networking configuration only needs to be set from within the guest itself. Basically, I want this:

      - eth0: all host traffic
      - eth1: all guest traffic

    Open to any other suggestions if this isn't possible or will be kludgy/difficult. Pointers to existing documentation might not be helpful since I've already been through just about everything out there. Thanks for any help.
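    A sketch of the usual RHEL 5 layout for this split: leave eth0 as a normal host interface with the host's IP and default route, enslave eth1 to a bridge that carries no IP address of its own, and attach guests to that bridge (device names follow the question; everything else is illustrative):

        # /etc/sysconfig/network-scripts/ifcfg-eth1
        DEVICE=eth1
        ONBOOT=yes
        BRIDGE=br1

        # /etc/sysconfig/network-scripts/ifcfg-br1
        DEVICE=br1
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=none

    Guests are then defined with a bridge-type interface pointing at br1 (in libvirt terms, <interface type='bridge'><source bridge='br1'/></interface>), and each guest configures its own static IP internally, which matches the goal of keeping per-guest network setup inside the guest.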

    Read the article

  • Active Directory Restricted Group confusion

    - by pepoluan
    I am trying to implement Restricted Groups policy for my company's AD infrastructure, namely standardizing the local "Administrators" group. The documentation (and various webpages) says that the "Members of this group" policy will wipe out the "Administrators" group. However, an experiment left me confused. I created 2 GPOs:

      - GPO-A replaces the local Administrators with a list of domain users (e.g., "Alice" and "Bob")
      - GPO-B inserts a domain user (e.g., "Charlie" - not part of GPO-A) into the local Administrators

    Experiment 1: GPO-A gets applied first (link order 2). Everything happens as expected: GPO-A cleans out the local Admins and adds "Alice" and "Bob"; GPO-B adds "Charlie".

    Experiment 2: GPO-B is applied first. What happens: "Charlie" gets added to the local Admins group (which also contains 2 local users); the local users on the PC get deleted, and "Alice" and "Bob" get added. Result: the local Admins contain "Alice", "Bob", and "Charlie".

    My confusion: in Experiment 2, I thought GPO-A would totally erase the local Admins group, including users added by GPO-B (since GPO-A gets applied after GPO-B). As it happens, it only erases local users from the local Admins but keeps the domain users. So, is that the way it should be? Or am I doing something incorrectly?

    Read the article

  • Security updates in CentOS - which way is it?

    - by user119720
    Recently something has been bothering me about my Linux CentOS box. A client has asked me to set up a CentOS machine in their environment to work as a server. One of their requirements is to make the setup as secure as possible. Most of it has been covered except for security updates inside CentOS. So my questions are as follows:

    1. How do I apply the latest security patches and bug fixes in CentOS? While doing some research, I was told that we can update the security of CentOS by running yum install yum-security, but after installing this plugin there seems to be no output from this method. It's as if the command is not working anymore.

    2. Can I apply the security patches through RPM packages? I couldn't find any site where I can download the security patches, enhancements or bug fixes for CentOS. But I know that CentOS has been releasing these updates through their CentOS announcements here. There is just a lack of documentation on how to apply these updates to my CentOS installation. For now the only way that I know of is to run yum update.

    I am hoping that someone can help me clarify these matters. Thanks.
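    For reference, a sketch of the commands the yum security plugin provides on CentOS 5, with one important caveat: CentOS itself does not publish the security metadata the plugin relies on, so these commands may legitimately report nothing even when updates exist, which matches the "no output" symptom. A plain yum update still pulls in every published fix.

        yum install yum-security        # CentOS 5 package name for the plugin
        yum list-security               # list updates flagged as security fixes (needs updateinfo metadata)
        yum update --security           # apply only flagged security updates
        yum update                      # apply everything - on CentOS this is the practical route

    The announcement mailing list referenced in the question remains the way to see which errata a given package update corresponds to.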

    Read the article

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple test PostgreSQL 9.0 database, as per the documentation. In postgres.conf I've set:

        wal_level = archive
        archive_mode = on
        archive_command = 'touch /home/myusername/backup/testtouch'
        archive_timeout = 30s

    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever for the archive_command. In psql:

        postgres=# SELECT pg_start_backup('touchtest');
         pg_start_backup
        -----------------
         0/14000020
        (1 row)

        postgres=# SELECT pg_stop_backup();
        NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived
        WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
        HINT: Check that your archive_command is executing properly. pg_stop_backup can be cancelled safely,
        but the database backup will not be usable without all the WAL segments.

    What would cause this? How can I troubleshoot it? Additional info: running on CentOS 5.4, PostgreSQL 9.0.2 installed as root.
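    Two things worth checking, offered as a sketch rather than a diagnosis: the archiver only runs when a WAL segment actually completes (archive_timeout only forces a segment switch if there has been some write activity since the last one), and the command runs as the server's postgres user, so that user must be able to write to the target directory. The documented style of archive_command also uses the %p/%f placeholders, and WAL activity can be forced to test it:

        # postgresql.conf - example in the style of the PostgreSQL 9.0 docs, path is illustrative
        archive_command = 'cp %p /home/myusername/backup/%f'

        -- in psql: generate some WAL, then force the current segment to be archived
        CREATE TABLE archive_test (id int);
        INSERT INTO archive_test SELECT generate_series(1, 100000);
        SELECT pg_switch_xlog();

    If the copy still never appears, the server log usually records the archive_command failure and its exit status, which narrows it down quickly.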

    Read the article

  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
    We want to make all cookies set by our webapp http-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.05 and WebLogic 10.3.0. After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support http-only cookies is 10.3.1. By "support," I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it'd be nice to have these questions answered:

    1a) Is it accurate that WL 10.3.1 is the first version to support http-only cookies, and that we're out of luck with 10.3.0?

    1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version?

    2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application have an http-only=true attribute? Will it do more or less? Is there anything I'll have to do programmatically?

    3) Is there anything in general I should know about http-only cookies, getting my web app to take advantage of them, or other security concerns?
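    For reference, the element in question lives in the session-descriptor of weblogic.xml; a sketch of how it is typically set (the namespace and version attributes vary by release, so treat this as illustrative):

        <?xml version="1.0" encoding="UTF-8"?>
        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
            <session-descriptor>
                <cookie-http-only>true</cookie-http-only>
            </session-descriptor>
        </weblogic-web-app>

    As generally described, this controls the cookies the container generates (primarily the JSESSIONID session cookie); any cookies the application code adds itself would still need the HttpOnly flag appended when they are written, which partly answers question 2.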

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I get strange behavior on the live server that does not happen on my local server. My local server is a Mac, and my live server is Linux. Consider when I try to access some files:

        http://redddor.babonmultimedia.com/assets/images/map-1.jpg

    This works correctly.

        http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php

    This returns 404. I'm pretty sure my file is there and there is no typo. How come it gives me a 404? There is only one .htaccess on the root of the server and its configuration is like this:

        # For full documentation and other suggested options, please see
        # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions
        # including for unexpected logouts in multi-server/cloud environments
        # and especially for the first three commented out rules
        #php_flag register_globals Off
        #AddDefaultCharset utf-8
        #php_value date.timezone Europe/Moscow

        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        <IfModule mod_security.c>
        SecFilterEngine Off
        </IfModule>

        # Fix Apache internal dummy connections from breaking [(site_url)] cache
        RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC]
        RewriteRule .* - [F,L]

        # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin
        #RewriteCond %{HTTP_HOST} .
        #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
        #RewriteRule (.*) http://www.example.com/$1 [R=301,L]

        # Exclude /assets and /manager directories and images from rewrite rules
        RewriteRule ^(manager|assets)/*$ - [L]
        RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L]

        # For Friendly URLs
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

        # Reduce server overhead by enabling output compression if supported.
        #php_flag zlib.output_compression On
        #php_value zlib.output_compression_level 5
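    One observation: the .jpg request is let through by the image-extension rule, while the .php request has to rely on the REQUEST_FILENAME checks (or on the ^(manager|assets)/*$ exclusion, which only matches the bare directory names, not paths beneath them). A hedged debugging sketch to confirm whether mod_rewrite or mod_security is the one answering 404 - RewriteLog is only valid in server/vhost context on Apache 2.2, not in .htaccess, and the docroot path below is a placeholder:

        # in the vhost, temporarily
        RewriteLog /tmp/rewrite.log
        RewriteLogLevel 3

        # from a shell, compare what the filesystem and HTTP see
        ls -l /path/to/docroot/assets/modules/evogallery/check.php
        curl -I http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php

    If the rewrite log shows the request being sent to index.php, MODX itself is producing the 404; if not, file permissions or ownership on the live server are the next suspects.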

    Read the article

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.]

    Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it.

    Goals:

      - Create a 'road-warrior' capable, star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers.
      - Enable CIFS access to a cloud-server based installation of Alfresco.
      - Allow eventual implementation of some form of single-sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future.

    Given:

      - All servers will live in the public internet cloud (Rackspace Cloud Servers).
      - The OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start).
      - Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes over various ISPs, using their company laptops or personal home desktops.

    Based on my research thus far, to accomplish this I'll need:

      - OpenVPN with bridging enabled to create a star-shaped "virtual" LAN: http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html
      - A Road Warrior network configuration, as described in this Shorewall article (lower down the page): http://www.shorewall.net/OPENVPN.html
      - Configure bridge addressing (probably DHCP): http://openvpn.net/index.php/open-source/faq.html#bridge-addressing
      - Configure CIFS / Samba to accept the VPN IP addresses: http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel
      - Set up client software, with keys configured for access (potentially through an OpenVPN-AS client portal): http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html
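    A hedged sketch of the bridged server side described in the first and third links (addresses, certificate filenames and the tap/bridge pairing are placeholders; tap0 has to be enslaved to the bridge by a bridge-start script as in the OpenVPN bridging howto before the daemon starts):

        # /etc/openvpn/server.conf (sketch)
        port 1194
        proto udp
        dev tap0
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        # bridge IP, netmask, and the address pool handed to road-warrior clients
        server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
        keepalive 10 120
        persist-key
        persist-tun

    With the clients landing on the same bridged subnet as the server, the Samba/CIFS step then reduces to allowing that subnet in smb.conf, as the linked Server Fault question describes.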

    Read the article

  • Help setting up a secondary authoritative DNS server

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

    Authoritative servers:

      - DNS1 - Windows 2003
      - DNS2 - old Red Hat (replacing with a newer version)
      - DNS3 - Windows 2008 (I installed)

    Caching and recursive resolver servers:

      - Server1 - Windows 2003
      - Server2 - CentOS 5.2 (I installed)
      - Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to be caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with using these instructions, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue.

        ps -aux | grep dns
        avahi     3493  0.0  0.2  2600 1272 ?      Ss   Apr24   0:05 avahi-daemon: running [newdns2.local]
        root      5254  0.0  0.1  3920  680 pts/0  R+   09:56   0:00 grep dns
        root      6451  0.0  0.0  1528  308 ?      S    Apr29   0:00 supervise tinydns
        dnslog    6454  0.0  0.0  1540  308 ?      S    Apr29   0:00 multilog t ./main
        tinydns   9269  0.0  0.0  1652  308 ?      S    Apr29   0:00 /usr/local/bin/tinydns
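    For what it's worth, a hedged sketch of the usual djbdns "secondary" pattern: tinydns only answers from its local data.cdb and only on the single IP in its env/IP file, so a secondary is just a tinydns instance whose data file is refreshed from the master - either with the existing Perl script or with axfr-get. The zone name, master IP, listening IP and service path below are placeholders:

        # tinydns answers only on the address in env/IP - it must be the server's real interface IP
        echo 192.0.2.53 > /service/tinydns/env/IP
        svc -t /service/tinydns

        # pull a zone from the master by AXFR and rebuild data.cdb (run from the tinydns root dir)
        cd /service/tinydns/root
        tcpclient 192.0.2.1 53 axfr-get example.edu data.example data.example.tmp
        cat data.example > data
        make

    The "No response from server" symptom is consistent with env/IP still being 127.0.0.1 or another address, since tinydns does not listen on all interfaces; that is worth checking before anything else.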

    Read the article

  • IPMI sdr entity 8 (memory module) only showing 3 records?

    - by thinice
    I've got two Dell PE R710s:

      - A has a single socket and 3 DIMMs in one bank
      - B has both sockets and 6 DIMMs (2 banks @ 3 DIMMs) filled

    The output from "ipmitool sdr entity 8" confuses me - according to the OpenIPMI documentation these are supposed to represent DIMM slots.

    Output from A (1 CPU, 3 DIMMs, 1 bank):

        ~#: ipmitool sdr entity 8
        Temp | 0Ah | ok  | 8.1 | 27 degrees C
        Temp | 0Bh | ns  | 8.1 | Disabled
        Temp | 0Ch | ucr | 8.1 | 52 degrees C

    Output from B (2 CPUs, 3 DIMMs in both banks, 6 total):

        ~#: ipmitool sdr entity 8
        Temp | 0Ah | ok  | 8.1 | 26 degrees C
        Temp | 0Bh | ok  | 8.1 | 25 degrees C
        Temp | 0Ch | ucr | 8.1 | 51 degrees C

    Now, I'm starting to think this output isn't the DIMMs themselves, but maybe a sensor for each bank and something else? (Otherwise, shouldn't I see 6 readings for the one with both banks active?) The CPUs aren't near 50 deg C, so I doubt the significantly higher reading is due to proximity. Is anyone able to explain what I'm seeing? Does the output from my ipmitool sdr entity 8 -v here on pastebin seem to hint at different sensors? The sensor naming conventions are poor - seems like a Dell thing. Here is output from racadm racdump.

    Read the article

  • Why are FMS logs filled with 'play' event status code 408 for a failed webcast?

    - by Stu Thompson
    Recently we had a live webcast event go horribly wrong. I'm doing the technical post-mortem with limited information. We know that the hardware encoder (a Digital Rapid Touch Stream Web HDI) was unable to send upstream at a sustained, reliable, high rate. What we don't know is whether the encoder's connection was problematic (Zürich) or that of the streaming server (in Frankfurt). Unfortunately, I've got three different vendors all blaming each other (the CDN who runs the server, the on-site ISP and the on-site encoding team).

    In the FMS log files I see a couple of interesting things:

      - Zillions of status code 408 entries on play events for clients. Adobe's documentation states that this means "Stream stopped because client disconnected". ("Zillions" would be a ratio of 10 events for every individual IP address.)
      - Several unpublish / (re)publish events per hour for the encoder.

    I'd like to know if all those 408s could tell me with authority that the FMS server was starved for bandwidth, or that the encoding signal was starved (and hence the server was disconnecting clients). Any clues?

    Read the article

  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu Server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

        start on started mysql
        stop on stopping mysql
        chdir /home/myuser/mydir/project
        exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
        respawn
        respawn limit 10 90

    As myuser I can start, stop, and reload the job fine - it works perfectly:

        $ start sensors
        sensors start/running, process 1332
        $ stop sensors
        sensors stop/waiting

    The problem is that the job is not starting automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin on bootup, if I use sudo to restart mysql it does indeed start my job. The following commands are run as myuser from a fresh startup:

        $ status sensors
        sensors stop/waiting
        $ sudo restart mysql
        mysql start/running, process 1209
        $ status sensors
        sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically on startup of the system? I know I could just throw something in rc.local to start it, or I could move my sensors.conf to /etc/init, but I'm curious if there is a way to do it using just Upstart.
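    If moving the job to /etc/init turns out to be the pragmatic answer, a hedged sketch of a system job that still runs the process as the unprivileged user - the setuid stanza only appeared in later Upstart releases, so this uses start-stop-daemon to drop privileges; the stanzas and paths are copied from the question, the privilege-drop line is an assumption:

        # /etc/init/sensors.conf (system job sketch)
        start on started mysql
        stop on stopping mysql
        respawn
        respawn limit 10 90
        chdir /home/myuser/mydir/project
        exec start-stop-daemon --start --chuid myuser \
            --exec /home/myuser/mydir/env/bin/python -- \
            /home/myuser/mydir/project/manage.py sensors

    Because start-stop-daemon execs the program in the foreground here, Upstart's default process tracking should still work without an expect stanza.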

    Read the article

  • Iterating through folders and files in batch file?

    - by Will Marcouiller
    Here's my situation. A project has as an objective to migrate some attachments to another system. These attachments will be located under a parent folder, let's say "Folder 0" (see this question's diagram for better understanding), and they will be zipped/compressed. I want my batch script to be called like so:

        BatchScript.bat "c:\temp\usd\Folder 0"

    I'm using 7za.exe as the command line extraction tool. What I want my batch script to do is to iterate through "Folder 0"'s subfolders and extract all of the ZIP files they contain into their respective folders. It is obligatory that the extracted files end up in the same folder as their respective ZIP files. So, files contained in "File 1.zip" are needed in "Folder 1", and so forth. I have read about the FOR...DO command on Windows XP Professional Product Documentation - Using Batch Files. Here's my script:

        @ECHO OFF
        FOR /D %folder IN (%%rootFolderCmdLnParam) DO FOR %zippedFile IN (*.zip) DO 7za.exe e %zippedFile

    I guess that I would also need to change the current directory before calling 7za.exe e %zippedFile for file extraction, but I can't figure out how in this batch file (though I know how on the command line, and even if I know it is the same instruction "cd"). Anyone's help is gratefully appreciated.
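    A hedged sketch of one way to do it (assuming 7za.exe is on the PATH): inside a batch file the loop variables need double percent signs, FOR /R handles the recursion into the subfolders, and PUSHD/POPD answer the "change directory" part so each archive is extracted next to itself.

        @ECHO OFF
        REM %1 is the parent folder, e.g. "c:\temp\usd\Folder 0"
        FOR /R "%~1" %%Z IN (*.zip) DO (
            PUSHD "%%~dpZ"
            ECHO Extracting "%%~nxZ" in "%%~dpZ"
            7za.exe x -y "%%~nxZ"
            POPD
        )

    Here %%~dpZ expands to the folder containing the ZIP and %%~nxZ to its filename, so 7za extracts into the ZIP's own folder; switch x to e if the flattened extraction mode is actually wanted.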

    Read the article

  • nginx connection pool race condition?

    - by wlf
    I have a shared hosting server with high traffic. I have a lightweight Apache mod_proxy front end for static content that, from time to time, has a "504 proxy error" problem proxying to apache/mod_php. The error log says:

        error reading status line from remote server 127.0.0.1:8080
        Error reading from remote server returned by /

    This is what the Apache documentation says about it:

        proxy-initial-not-pooled
        If this variable is set no pooled connection will be reused if the client connection is an
        initial connection. This avoids the "proxy: error reading status line from remote server"
        error message caused by the race condition that the backend server closed the pooled
        connection after the connection check by the proxy and before data sent by the proxy
        reached the backend. It has to be kept in mind that setting this variable downgrades
        performance, especially with HTTP/1.0 clients.

    I am really concerned about this downgrade in performance, therefore I started to look at nginx immediately. I am new to nginx and time is crucial right now; I can't afford to waste days studying it just to find out it has the same race condition issue. Is nginx affected by this connection pool race condition? Thanks
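    For context, a hedged sketch of the equivalent nginx front end (server name, paths and extensions are placeholders). In its default configuration nginx opens a fresh connection to the upstream for each proxied request rather than reusing a pooled one, which is the behaviour relevant to the race condition being asked about:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            # static content served directly by nginx
            location ~* \.(css|js|png|jpg|gif|ico)$ {
                expires 7d;
            }

            # everything else proxied to the Apache/mod_php backend
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_read_timeout 60s;
            }
        }

    This is a sketch of the topology, not a tuned configuration; timeouts and buffering would still need adjusting for a high-traffic shared host.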

    Read the article

  • Exchange 2010 Recovery: Mailbox not found using Restore-Mailbox

    - by user146665
    The information store database of an Exchange 2010 SP1 Update Rollup 5 server was successfully restored to a Recovery Database using EMC NetWorker. The Recovery Database is in a mounted state with mailboxes listed within it. However, when restoring the mailbox content using the following command:

        Restore-Mailbox -Identity MYMAILBOX -RecoveryDatabase MYRECOVERYDB -RecoveryMailbox LOSTMAILBOX -TargetFolder FOLDERFORLOSTMAILBOX

    it returns the following error:

        Mailbox "LOSTMAILBOX" doesn't exist on database "MYRECOVERYDB".
        + CategoryInfo : NotSpecified: (0:Int32) [Restore-Mailbox], ManagementObjectNotFoundException
        + FullyQualifiedErrorId : 66265C53,Microsoft.Exchange.Management.RecipientTasks.RestoreMailbox

    Note: I've used the correct alias for the mailbox name; I've also tried combinations such as first name, last name, or both, and so forth. Issuing Get-MailboxStatistics -Database MYRECOVERYDB to see if the mailbox is there shows that it is:

        DisplayName     ItemCount   StorageLimitStatus
        LOSTMAILBOX     39495       MailboxDisabled

    Note: the StorageLimitStatus shows a strange value of MailboxDisabled. Perhaps this may be the culprit. Going by the article's documentation I cannot complete the restore of the mailbox, as I'm stuck at the Restore-Mailbox error that it cannot be found. Please advise, and thank you!

    Source of article: http://www.testlabs.se/blog/2012/07/05/exchange-2010-restore-to-recovery-database-using-emc-networker/
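    On Exchange 2010 SP1 and later, the newer restore-request cmdlets are often the easier path for recovery databases; a hedged sketch using the placeholder names from the question (SourceStoreMailbox usually has to be the display name or GUID exactly as reported by Get-MailboxStatistics against the recovery database):

        New-MailboxRestoreRequest -SourceDatabase MYRECOVERYDB `
            -SourceStoreMailbox "LOSTMAILBOX" `
            -TargetMailbox MYMAILBOX `
            -TargetRootFolder FOLDERFORLOSTMAILBOX `
            -AllowLegacyDNMismatch

        # monitor progress
        Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics

    This is offered as an alternative to try rather than a confirmed fix for the "doesn't exist on database" error; the AllowLegacyDNMismatch switch is included because the restored copy is being pushed into a different mailbox than the original.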

    Read the article

  • How can I tell GoogleBot that a subdirectory is now a subdomain? [migrated]

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and now that's moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though I have set up the redirect rule in the Apache sites-enabled configuration file (i.e. it happens early, when Apache does the redirect - PHP is not even getting loaded), the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that is hiking up the CPU for a few hours at a time.

    I checked in Webmaster Tools and the corresponding documentation for a way to let Google know that the content had been moved from a subdirectory to a subdomain, but with little luck. Basically the most helpful thing I saw said to just send 301 headers for the new location.

    How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send 301 redirects out for a particular subdomain? I was thinking perhaps the Nginx server, but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.
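    For what it's worth, the cheapest way to serve this in Apache is a mod_alias prefix redirect, which avoids mod_rewrite entirely (the directory and target hostname below are placeholders for the real ones):

        # in the VirtualHost for the old host
        Redirect permanent /catalog http://catalog.example.com

    Redirect appends whatever follows the prefix, so /catalog/item/123 becomes http://catalog.example.com/item/123. Beyond serving the 301s, Webmaster Tools also allows lowering Google's crawl rate for the site, which may be the more direct lever if the CPU spikes are the immediate problem.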

    Read the article

  • How do I configure a new (non-OS) raid device under Windows 7?

    - by GregH
    I recently installed 3 new 1 TB drives in my Windows 7 (64-bit) system. These are in addition to the 10k rpm disk already running the Windows 7 OS. My intent is to create a RAID 5 volume with the 3 disks. I don't seem to have a problem configuring the BIOS and creating the resulting 1.9 TB RAID volume. I run into the problem when I try booting into Windows: I get a quick flash of a blue screen and then am prompted by Windows to do a repair. It tries to repair and then reboots. This sequence repeats indefinitely. If I re-configure the BIOS back to non-RAID (AHCI), then Windows boots fine. The strange thing is that the 1.9 TB volume I configured through the BIOS actually shows up in Windows - strange, since the motherboard is not set up with RAID.

    I assume that I somehow have to install the RAID drivers from the motherboard manufacturer. How do I do this? Is the reason I'm getting the blue screen a result of not having the RAID drivers installed? It's strange because I can find plenty of documentation on how to set up RAID and do a fresh install of Windows onto the RAID device, but nothing on how to set up a RAID device on an already-running system.
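    One commonly described workaround, offered as an assumption about the cause rather than a confirmed fix, and assuming an Intel chipset whose RAID mode is handled by the inbox iaStorV driver: enable boot-start for the relevant storage drivers in the registry while still booted in AHCI mode, then switch the BIOS controller mode to RAID, so Windows can load a driver for its own boot disk instead of blue-screening, and install the motherboard vendor's RAID package once it boots.

        rem run from an elevated command prompt before switching the BIOS to RAID
        reg add HKLM\SYSTEM\CurrentControlSet\Services\iaStorV /v Start /t REG_DWORD /d 0 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\msahci  /v Start /t REG_DWORD /d 0 /f

        rem then reboot, enter the BIOS, switch the controller to RAID, and install the
        rem motherboard manufacturer's RAID driver package from within Windows

    Whether iaStorV is the right service depends on the chipset; for non-Intel controllers the vendor's own driver would need to be installed and set to boot start instead.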

    Read the article

  • Invalid Parameter on node puppet

    - by chandank
    I am getting an error of:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter port at /etc/puppet/manifests/nodes/node.pp:652 on node test-puppet

    My node definition (line 652 of node.pp):

        node 'test-puppet' {
          class { 'syslog_ng':
            host    => "newhost",
            ip      => "192.168.1.10",
            port    => "1999",
            logfile => "/var/log/test.log",
          }
        }

    On the module side:

        class syslog_ng::config ($host, $ip, $port, $logfile) {
          file { '/etc/syslog-ng/syslog-ng.conf':
            ensure  => present,
            owner   => 'root',
            group   => 'root',
            content => template('syslog-ng/syslog-ng.conf.erb'),
            notify  => Service['syslog-ng'],
            require => Class['syslog_ng::install'],
          }
          file { "/etc/syslog-ng/conf/${host}.conf":
            ensure  => present,
            owner   => 'root',
            group   => 'root',
            notify  => Service['syslog-ng'],
            content => template("syslog-ng/${host}.conf.erb"),
            require => Class['syslog_ng::install'],
          }
        }

    I think I am doing it as per the Puppet documentation.
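    The mismatch that stands out (offered as a guess): the node declares the class syslog_ng with a port parameter, but the parameters shown belong to syslog_ng::config; unless the top-level syslog_ng class also accepts port and passes it down, the master rejects it as an invalid parameter, which is exactly the error reported. A sketch of a wrapper class that would make the node declaration valid - the class layout (install/config/service) is an assumption, not taken from the module:

        # modules/syslog_ng/manifests/init.pp (sketch)
        class syslog_ng ($host, $ip, $port, $logfile) {
          class { 'syslog_ng::install': }
          class { 'syslog_ng::config':
            host    => $host,
            ip      => $ip,
            port    => $port,
            logfile => $logfile,
            require => Class['syslog_ng::install'],
          }
        }

    Alternatively, the node could declare class { 'syslog_ng::config': ... } directly with those four parameters, at the cost of bypassing whatever the top-level class is meant to do.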

    Read the article

  • IIS Manager - Connect to Another Server (Win7 to Win2008 server)

    - by Matt
    I am running Windows 7 Ultimate. If I open up IIS Manager, I see a list of "connections" on the left hand side. In previous versions, I would be able to select an option to "connect to another server" or "connect to another machine", but there is no such option visible anywhere here. The only thing in the list is my local machine. Even in the address bar, if I manually type in the server location (\\servername, I even tried just servername), nothing happens (it just reverts back to my current local computer). The documentation at http://technet.microsoft.com/en-us/library/cc732466%28WS.10%29.aspx seems to imply the very same steps... but there is just no button or menu option anywhere to do this. Am I missing something? I'm not even seeing a grayed-out menu option.

    EDIT: Under the "File" menu, I see 2 options:

      - Save Connections (grayed out)
      - Exit

    Under the "Connections" pane, I see 1 button, grayed out. When I hover the mouse, it simply says "Up"; it appears to be usable if I browse into an element in my local computer's IIS settings. If I right-click inside the pane itself, I see:

      - Refresh
      - Add website (to the current host)
      - Start
      - Stop
      - Rename
      - Switch to Content View

    UPDATE: I downloaded and installed the Remote Server Administration Tools from http://www.microsoft.com/downloads/details.aspx?FamilyID=7D2F6AD7-656B-4313-A005-4E344E43997D&displaylang=en, and I enabled everything listed under "Remote Server Administration Tools" under "Turn Windows Features On or Off". Still nothing.

    Read the article

  • Virtualbox - differencing disk based on different differencing disk

    - by Klinki
    I'm trying to create a differencing image based on another differencing image in VirtualBox 4.2.18. The official documentation says it should be possible: http://www.virtualbox.org/manual/ch05.html#diffimages

    Basically I want to achieve this drive hierarchy:

        + immutable image with Debian and all software installed
        +---- differencing image with specific configuration, autoreset=off, readonly
        +-------- differencing image with autoreset=on
        +---- another differencing image for a different virtual machine
        +-------- differencing image with autoreset=on

    I successfully created a differencing image based on a differencing image, but I'm not able to connect it to a virtual machine :( It always shows the error:

        Failed to open the hard disk .... cannot register hard disk ... because hard disk with UUID ... already exists

    Here is a screenshot of the Virtual Media Manager and the error dialog: Virtual Media Manager Window screenshot

    What is very strange is that the new differencing image (tempdrive.vdi) doesn't have an Actual Size of 0. I wasn't able to connect it, but still, it has 36KB of data on it... This is very similar to this older question: How to create a chained differencing disk of another differencing disk in Virtual Box? but the suggested solution no longer works in VirtualBox 4.2.18, so I posted it as a new question. (The limit for posting links and screenshots is quite annoying.)

    Read the article
