Search Results

Search found 23627 results on 946 pages for 'alter script'.


  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers - one is running MySQL and the other is the public facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in terms of not being able to replicate at present (various aspects are still dependent on the local filesystem).

    The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e. set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)?

    I don't foresee having to scale the ElasticSearch server anytime soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this.

    One more quick question while I'm here - how would I deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
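
    One sketch of handling a variable server list (assuming the web instances carry an identifying EC2 tag; the tag name and the use of Capistrano's -S flag are illustrative, not Capifony-specific):

        # enumerate running web instances, then hand the list to cap
        WEB_HOSTS=$(aws ec2 describe-instances \
          --filters "Name=tag:role,Values=web" "Name=instance-state-name,Values=running" \
          --query 'Reservations[].Instances[].PublicDnsName' --output text)
        cap deploy -S domains="$WEB_HOSTS"   # deploy.rb would split this into the :domain array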

    Read the article

  • SWATCH - what am I doing wrong?

    - by Brian Dunbar
    What I want/need/desire is to log when a user logs into my FTP server. Problem: I can't make swatch work the way it should. The logins are logged to a file, but of course these logs are not kept very long. I can't keep the logs around forever, but I can extract data from them, analyze it, and store the results elsewhere. If there is a better way to do this than the following, I'm all ears.

    Swatch version 3.2.3, Perl 5.12, FTP: VSFTP, OS (test): OS X 10.6.8, OS (production): Solaris.

    From man I see I can pass the matched contents to a command, so I should be able to echo those values to a file and do a sed/cut/uniq thing on them for stats:

        $ man swatch
        (snip)
        exec command
            Execute command. The command may contain variables which are
            substituted with fields from the matched line. A $N will be
            replaced by the Nth field in the line. A $0 or $* will be
            replaced by the entire line.

    Swatch file .swatchrc:

        watchfor /OK LOGIN/
            echo=red
            pipe "echo "0: $0 1:$1 2:$2 3:$3 4:$4 5:$5" >> /Users/bdunbar/dev/ftplog/output.txt"

    Launch with:

        $ swatch -c /Users/bdunbar/.swatchrc --script-dir /Users/bdunbar/dev/ftplog -t /Users/bdunbar/dev/ftplog/vsftpd.log &

    Test:

        echo "Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client "206.209.255.227"" >> vsftpd.log

    Results - it's echoing to TTY. This is not needed or desired on the server, but it does tell me things are working:

        ftplog *** swatch version 3.2.3 (pid:25780) started at Mon Jul 9 15:23:33 CDT 2012
        Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client 206.209.255.227

    Results - bad! I appear to not be sending the variables to text:

        $ tail -f output.txt
        0: /Users/bdunbar/dev/ftplog/.swatch_script.25780 1: 2: 3: 4: 5:
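
    Going by the man excerpt quoted above, field substitution is documented for exec rather than pipe, and the nested double quotes let the shell expand $0 (to the script path) before swatch ever sees it. A hedged rewrite of the .swatchrc that avoids both issues:

        watchfor /OK LOGIN/
            echo=red
            exec "echo $0 >> /Users/bdunbar/dev/ftplog/output.txt"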

    Read the article

  • How do I prevent my ASP .NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP .NET website up and running on IIS6. The site runs in its own application pool and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However, we need the app pool to run under a different account, because that account needs some extra privileges (we are printing Word documents). The new account is a member of the local Users group and the IIS_WPG group, and it has been granted the "Log on as a service" right.

    When I browse to the site I am prompted for credentials - not once, but several times. When the page finally loads it looks wrong, because the style sheets have not been applied. My suspicion is that I am being prompted once for each file the browser requests (all the images, styles and script files), and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access the page loads fine - we don't want to allow it, but I mention it in case it offers any further clues.

    My theory is that perhaps the account the app pool runs under needs permission to validate domain credentials? If that is so, how do I enable this?
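
    If Kerberos is in play, this is the classic symptom of a custom app pool identity without a service principal name: clients request a ticket for HTTP/sitename, the ticket is encrypted for the machine account, and the new identity can't decrypt it. A hedged first step (names are placeholders):

        setspn -A HTTP/mysite MYDOMAIN\MyAppPoolAccount
        setspn -A HTTP/mysite.mydomain.local MYDOMAIN\MyAppPoolAccount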

    Read the article

  • Mount a VHD on Mac OS X

    - by janm
    Is it possible (and if so, how) to mount a VHD file created by Windows 7 in OS X? I found some information about how to do this on Linux. There is a FUSE fs, "vdfuse", which uses the VirtualBox libs to mount filesystems supported by VirtualBox. However, I was initially unable to compile the package on OS X because nearly all the headers were missing, and I doubted it would work anyway...

    EDIT #2: Okay, I got my hands dirty and finally compiled vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) on OS X. As a starting point I used MacFUSE (http://code.google.com/p/macfuse/) and looked at the example file systems. This led me to the following build script:

        infile=vdfuse.c
        outfile=vdfuse
        incdir="your/path/to/vbox/headers"
        INSTALL_DIR="/Applications/VirtualBox.app/Contents/MacOS"
        CFLAGS="-pipe"
        gcc -arch i386 "${infile}" \
            "${INSTALL_DIR}"/VBoxDD.dylib \
            "${INSTALL_DIR}"/VBoxDDU.dylib \
            "${INSTALL_DIR}"/VBoxVMM.dylib \
            "${INSTALL_DIR}"/VBoxRT.dylib \
            "${INSTALL_DIR}"/VBoxDD2.dylib \
            "${INSTALL_DIR}"/VBoxREM.dylib \
            -o "${outfile}" \
            -I"${incdir}" -I"/usr/local/include/fuse" \
            -Wl,-rpath,"${INSTALL_DIR}" \
            -lfuse_ino64 \
            -Wall ${CFLAGS}

    You don't actually need to compile VirtualBox on your machine, just install a recent version of it. So now I can partially mount VHDs: the separate partitions appear as block files Partition1, Partition2, ... on my mount point. However, Mac OS X does not include a loopback file system, and MacFUSE's loopback fs does not work with block files, so we still need a loopback fs to mount the block files as actual partitions.
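
    For reference, a hedged sketch of using the resulting binary (paths are examples); the partitions then show up as the block files described above:

        mkdir -p ~/vhd
        ./vdfuse -f ~/disks/win7.vhd ~/vhd
        ls ~/vhd    # typically EntireDisk, Partition1, Partition2, ...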

    Read the article

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in this question on StackOverflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of express middleware. I need to narrow the problem down, though, to find out WHERE the traffic is being lost, and it was suggested that I try a packet sniffer on my app servers.

    Here's my setup:

    2x load balancers (m1.large)
    2x node.js servers (also m1.large)

    Here's what's interesting/unusual: the node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the node.js server:

        RewriteCond %{HTTP_HOST} ^api.streamified.me
        RewriteRule ^(.*) http://localhost:8888$1 [P]

    So the request hits the load balancer, goes to an app server, gets routed to port 8888 if it's intended for the API, and gets handled by node.js.

    In the same httpd.conf file, I turned on RewriteLogLevel 5 and then created a simple PHP+CURL script on my localhost to hit api.streamified.me with a random URL (which should cause node.js to trigger a simple "not found" response) until it resulted in a gateway timeout. Here, you can see that it has happened - and the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by node.js (or, at least, the first line of code in the first line of middleware never gets it...)

    Image link: http://i.stack.imgur.com/3OQxS.png
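
    Since the rewrite log proves the proxy leg is happening, a capture on the loopback interface should show whether the forwarded request ever reaches port 8888 and whether anything ACKs it. A minimal capture, assuming a Linux app server:

        sudo tcpdump -i lo -n -A 'tcp port 8888'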

    Read the article

  • How do you set max execution time of PHP's CLI component?

    - by cwd
    How do you set the max execution time of PHP's CLI component? I have a CLI script that has gone into an infinite loop and I'm not sure how to kill it without restarting. I used Quicksilver to launch it, so I can't press control+c at the command line. I tried running ps -A (show all processes), but php is not showing up in that list, so perhaps it has timed out on its own - but how do you manually set the time limit?

    I tried to find information about where to set the max_execution_time setting. I'm used to setting this for the version of PHP that runs with Apache, but I have no idea where to set it for the version of PHP that lives in /usr/bin. I did see the following quote, which does seem to be accurate (see screenshot below), but having an unlimited execution time doesn't seem like a good idea:

        Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it
        seems to be changed by ini_set or set_time_limit but it isn't, actually.
        The only references I've found to this strange decision are deep in
        bugtracker (http://bugs.php.net/37306) and in php.ini (comments for
        'max_execution_time' directive).

    (via http://php.net/manual/en/function.set-time-limit.php)

    ini_set('max_execution_time') has no effect. I also tried the same thing and got the same result with set_time_limit(7).
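
    Two hedged workarounds from the shell, given that the CLI limit is hardcoded: find and kill the runaway process (CLI scripts often show under their full command line rather than as "php"), and cap future runs externally. timeout is GNU coreutils; on OS X it is available as gtimeout once coreutils is installed.

        ps ax | grep '[p]hp'                  # the bracket trick keeps grep from matching itself
        kill <pid>                            # <pid> from the listing above
        timeout 300 php /path/to/script.php   # hard five-minute cap on future runs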

    Read the article

  • Postgres pgpass on Windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit.

    I'm trying to connect remotely to a Postgres instance to perform a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working.

    To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data:

        *:5432:*postgres:[mypassword]

    (I also tried explicit ip/dbname values, all asterisks, and every combination in between, and tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using:

        pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]

    I'm presuming that I need to provide the '-w' switch in order to ignore the password prompt, at which point it should look in the AppData directory. It just comes back with "connection to database failed: fe_sendauth: no password supplied."

    Any insights are appreciated. As a hack workaround, if there were a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
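
    For reference, pgpass.conf takes five colon-separated fields per line (hostname:port:database:username:password), and on Windows libpq looks for it under %APPDATA%\postgresql\, not the user profile root. The line quoted above appears to be missing the colon between the database and username fields; a minimal working example:

        # %APPDATA%\postgresql\pgpass.conf on the client machine
        *:5432:*:postgres:mypassword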

    Read the article

  • How do I uninstall a ruby version installed via source?

    - by Aaron McIver
    I installed a version (1.9.3-p194) of Ruby from source using make install, and realized this may have been the wrong route to take. Upon doing this, I realized it was a mistake and that I should be using a solution such as rvm to manage my Ruby versions within the OS. I looked to see if an uninstall target existed to be run in conjunction with make, and it didn't. I then proceeded to install rvm and add the aforementioned version to my list of managed rubies within rvm, where it is now listed as ext-ruby-1.9.3-p194:

        rvm rubies

           ext-ruby-1.9.3-p194 [ x86_64 ]
        =* ruby-1.9.3-p194 [ x86_64 ]

        # => - current
        # =* - current && default
        #  * - default

    When I perform an rvm remove, it simply removes it from the rubies list; it still exists within /usr/local/bin. I am not concerned with the system-installed Ruby version residing in /usr/bin, as I understand that is tied to the OS and should simply be ignored. How can I safely uninstall/remove the aforementioned version from all the places in which it was installed, short of picking through the install script?
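
    One hedged approach: re-run the install as a dry run from the same source tree to see exactly what it copied, then remove those paths by hand. For a default --prefix=/usr/local build of 1.9.3, that is typically:

        cd ruby-1.9.3-p194
        make -n install | less    # dry run: lists what was installed where
        sudo rm -f /usr/local/bin/{ruby,irb,gem,rake,rdoc,ri,erb,testrb}
        sudo rm -rf /usr/local/lib/ruby /usr/local/include/ruby-1.9.1
        sudo rm -f /usr/local/share/man/man1/ruby.1*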

    Read the article

  • Finding default gateway in an openvpn environment in windows

    - by Alexander Trümper
    I need to find the default gateway in an OpenVPN scenario, where the route output looks like this:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0       10.49.73.1      10.49.73.24     10
                  0.0.0.0        128.0.0.0         10.8.0.1         10.8.0.2     30

    So I googled around a bit and found this script:

        @For /f "tokens=3" %%* in (
            'route.exe print ^|findstr "\<0.0.0.0\>"'
        ) Do @Set "DefaultGateway=%%*"
        echo %DefaultGateway%

    This works, but matches both lines in the route output. I need to find only this line:

        0.0.0.0          0.0.0.0       10.49.73.1      10.49.73.24     10

    So I tried to modify the findstr parameter like this:

        findstr "\<0.0.0.0\>.\<0.0.0.0\>"

    in the expectation that '.' would match the whitespace between the columns. But it doesn't: it still sets DefaultGateway to 10.8.0.1. I couldn't find a clue in the MS documentation either. Maybe someone knows the right expression? Thanks a lot.
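
    One hedged fix: in findstr's regex mode, '.' matches exactly one character, but route.exe pads its columns with a run of spaces, so match the separator with ' *' and escape the literal dots:

        @For /f "tokens=3" %%G in (
            'route.exe print ^| findstr /r /c:"^ *0\.0\.0\.0 *0\.0\.0\.0"'
        ) Do @Set "DefaultGateway=%%G"
        @echo %DefaultGateway%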

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application, and suddenly my server response times have gone from 500ms average to 15s average. I have run every monitoring tool I know:

    iostat says my disks are 0.03% utilised
    mysqltuner.pl says I am OK (output below)
    top says my processor is 99% idle and load average: 0.20, 0.31, 0.33
    memory usage is 50% (-/+ buffers/cache: 3997 3974)

    mysqltuner output:

        [OK] Logged in using credentials from debian maintenance account.
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 370M (Tables: 52)
        [--] Data in InnoDB tables: 697M (Tables: 1749)
        [!!] Total fragmented tables: 1754
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
        [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
        [OK] Slow queries: 0% (1/1M)
        [OK] Highest usage of available connections: 34% (173/500)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
        [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
        [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
        [!!] Query cache prunes per day: 553779
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
        [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
        [OK] Thread cache hit rate: 84% (185 created / 1K connections)
        [!!] Table cache hit rate: 0% (256 open / 27K opened)
        [OK] Open file limit used: 0% (20/2K)
        [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
        [OK] InnoDB data size / buffer pool: 697.2M/1.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            table_cache (> 256)
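
    If you act on the tuner's two flagged items, the changes land in my.cnf; the values below are hedged starting points to raise gradually, not tuned numbers (and they may well be unrelated to the 15s regression):

        # /etc/mysql/my.cnf - restart mysqld (or SET GLOBAL) afterwards
        query_cache_size = 64M    # tuner: > 16M; the prune rate suggests it is too small
        table_cache      = 1024   # tuner: > 256; watch the open-file limit as you raise it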

    Read the article

  • Is there a faster way to change default apps associated with file types on OS X?

    - by Lri
    Is there anything more convenient than using RCDefaultApp or Magic Launch, or just repeatedly pressing the Change All buttons in Finder's information panels?

    I thought about writing a shell script that would modify the CFBundleDocumentTypes arrays in Info.plist files, but each app has multiple keys (sometimes an icon) that would need to be changed. lsregister can't be used to make specific modifications to the Launch Services database:

        $ `locate lsregister` -h
        lsregister: [OPTIONS] [ <path>... ]
                    [ -apps <domain>[,domain]... ]
                    [ -libs <domain>[,domain]... ]
                    [ -all  <domain>[,domain]... ]

        Paths are searched for applications to register with the Launch Service database.
        Valid domains are "system", "local", "network" and "user". Domains can also be
        specified using only the first letter.

          -kill     Reset the Launch Services database before doing anything else
          -seed     If database isn't seeded, scan default locations for applications
                    and libraries to register
          -lint     Print information about plist errors while registering bundles
          -convert  Register apps found in older LS database files
          -lazy n   Sleep for n seconds before registering/scanning
          -r        Recursive directory scan, do not recurse into packages or
                    invisible directories
          -R        Recursive directory scan, descending into packages and invisible
                    directories
          -f        force-update registration even if mod date is unchanged
          -u        unregister instead of register
          -v        Display progress information
          -dump     Display full database contents after registration
          -h        Display this help
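
    One scriptable route that avoids editing Info.plist files altogether is the third-party duti tool, which writes handler changes through Launch Services; a hedged sketch (bundle IDs are examples):

        duti -s com.apple.Safari public.html all    # bundle id, UTI or extension, role
        duti -s org.videolan.vlc mkv all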

    Read the article

  • Converting Audio To Video Output and Attaching Text?

    - by ZeeMan
    I am currently working on a project, and before I get started I thought it'd be nice to check with the StackOverflow community to see if maybe they can help me with this.

    The idea: I have about a thousand MP3 files that I need to convert into video files to be uploaded to YouTube for my work. Here is where it gets tricky: I also need to attach the text associated with each audio file to its video as an image. I was thinking .ppt.

    The problem: I can do this one audio file at a time, but it would take me a zillion years. lol!!

    The question: Can I create some kind of program, using let's say XML or JavaScript or XHTML or some other programming language, to do a MASS content creation where all I have to do is feed it the information? Possibly a script? Or is it possible to create an example .ppt file and then hack it so that I can have it reproduce itself with different information?

    The note: Thanks U In Advance For Helping Out!!! Regards, ZeeMan!!!
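
    This is scriptable with ffmpeg rather than PowerPoint: export each slide (or render the text) to a still image named after its MP3, then loop the image for the length of the audio. A hedged sketch, assuming pairs like track001.mp3 and track001.png:

        for f in *.mp3; do
          ffmpeg -loop 1 -i "${f%.mp3}.png" -i "$f" \
                 -c:v libx264 -tune stillimage -pix_fmt yuv420p \
                 -c:a aac -shortest "${f%.mp3}.mp4"
        done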

    Read the article

  • Use WSUS without client configuration

    - by sc911
    Hello *,

    Is there any way to let client PCs use the local WSUS server without having to configure them? What we need is a system to update PCs before they are delivered to the users. The WSUS server is accessible only within our lab, not later on at the users' sites. We'd like to use WSUS because it will speed up the downloads considerably. And we don't like modifying the clients, as those changes might be forgotten and left in place, and then no updates would be possible at the users' sites.

    The easiest way would be if one could redirect the normal Microsoft Update, but I'm pretty sure that won't be possible, as that update traffic will not be WSUS compliant. Another option I thought of might be for DHCP to deliver an extra option telling the clients where to get the updates, but I could not find any information about this, so it looks like that isn't possible either.

    So, is there any way? Or would it be easier to use a little script to change the WSUS entries automatically?

    Regards
    sc911
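
    For what it's worth, the standard no-Group-Policy way to point a client at WSUS is a handful of registry values, and the "forgotten change" worry can be managed by pairing the setup script with a cleanup script run before shipping (server URL is an example):

        REM enable.cmd - point this PC at the lab WSUS server
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d http://wsus.lab:8530 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d http://wsus.lab:8530 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f
        net stop wuauserv & net start wuauserv

        REM disable.cmd - remove every trace before the PC ships
        reg delete "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /f
        net stop wuauserv & net start wuauserv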

    Read the article

  • What is a suitable simple, open web server for Windows?

    - by alficles
    I'm looking for a dead simple web server for Windows. Load will not be high, as it will primarily be serving binaries for a WPKG update service. It needs to serve the entire contents of a single folder over HTTP on a configurable (high) port. No CGI or other scripting is required, but it might be nice for future features.

    I started with Mongoose, since it doesn't even have an installation requirement (a very nice perk), but it fails to start when run as a service. (Technically, it acts as its own installer.) I've investigated lighttpd as well, but it appears to be minimally (at best) tested on Windows. And naturally, I'm looking for something free. As in beer is good, but speech is better, as always.

    Edit: I didn't mention this initially, but non-tech people will be doing the install. They'll have whatever script I write for the install, but the goal is a simple system that is easy to troubleshoot. (I almost worded this question "What is the best...", but Serverfault rightly observed that that is a subjective question. And it's really not an optimization problem; any suitable solution will work. I just can't seem to find one for Windows.)
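
    If Mongoose is otherwise a fit, its one failing here (not running as a service) can be worked around with a service wrapper such as NSSM; a hedged sketch, with paths as examples:

        nssm install WPKGWeb C:\mongoose\mongoose.exe
        net start WPKGWeb
        REM port and document root go in mongoose.conf next to the exe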

    Read the article

  • How to make an X.509 certificate from a PEM one?

    - by Ken
    I'm trying to test a script, locally, which involves uploading a file using a Java-based program to a FileZilla FTPES server. For the real thing there is a real certificate on the FZ server, and the upload step (tested alone) seems to work fine. I've installed FileZilla Server on my dev box (so it'll test uploading from localhost to localhost). I don't have a real certificate for it, of course, so I used the "Generate new certificate..." button in FZ. It works fine from an interactive FTPES program (as long as I OK the unknown cert), but from my Java program it throws a javax.net.ssl.SSLHandshakeException ("unable to find valid certification path to requested target").

    So how do I tell Java that this certificate is OK with me? (I know there's a way to change the Java program to accept any certificate, but I don't want to go down that route. I want to test it just as it will happen in production, and I don't want to ignore unknown certificates in production.)

    I found that Java has a program called "keytool" that seems to be for managing this sort of thing, but it complains that the certificate file that FZ generated is not an "X.509" file. A posting from the FZ side said it was "PEM encoded". I have openssl here, which looks perfect for converting between certificate formats, but I think my understanding of certificate formats is wrong, because I'm not seeing anything obvious.

    My knowledge of security certificates is a bit shaky, so if my title is stupidly wrong, please help by fixing that. :-)
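
    If the PEM file FZ generated holds both the private key and the certificate (FileZilla writes them into one file), a hedged two-step is to pull out just the certificate and hand it to keytool; paths and the alias are examples, and the default cacerts password is "changeit":

        openssl x509 -in filezilla.pem -out fz.crt
        keytool -import -trustcacerts -alias filezilla-dev \
                -file fz.crt -keystore "$JAVA_HOME/jre/lib/security/cacerts"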

    Read the article

  • DHCPD (Slackware) - Disabling auto-generation of gateway as DNS server

    - by Dogbert
    Good day,

    I am using a Linux workstation on Slackware 13.37. One "problem" I have had to deal with ever since 11.0 is the following: DNS servers are queried and determined at startup by the DHCP daemon, which is invoked at startup by a script located at /etc/rc.d/rc.dhcpd. My DNS servers for my ISP are resolved correctly and stored in a list located at /etc/resolv.conf.

    The one annoying problem is that my gateway IP (i.e. 192.168.1.1) is always automatically put at the top of the list in resolv.conf, meaning I always have to wait for a timeout before a valid DNS server is used to resolve an address (i.e. the query to 192.168.1.1 times out because it is not actually a DNS server, and then the next server in the list is used). I could lower my DNS resolution timeout so the gateway query times out more quickly, but that's not what I want, as I don't want to degrade the abilities of legitimate DNS servers.

    What I would like to do is change how the DHCP client operates so that it does NOT put my gateway IP address at the beginning of this list. I've searched via "man dhcpd", etc., and haven't found the exact answer yet. Any help on this issue is appreciated. Thank you all in advance for your time and assistance.
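
    Assuming the client actually taking the lease is dhcpcd (which is what Slackware's network scripts normally invoke; dhcpd proper is the server), its config can pin the name servers so the router's address never reaches resolv.conf. A hedged sketch for /etc/dhcpcd.conf; the addresses are examples:

        # ignore DHCP-supplied name servers and pin trusted ones instead
        static domain_name_servers=208.67.222.222 208.67.220.220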

    Read the article

  • Track kids browsing history even when they know how to clear it manually

    - by Darren Newton
    I have a colleague with two teenage boys (yes, cue clichés about 'I have this friend, see...'). He's currently having issues with them browsing pr0n and wants to do a little spying on their browsing (I'm staying clear of the philosophies/ethics on this). The kids are savvy enough to clear their browsing history when they're done. As I'm his go-to for IT, he has asked me if there is a way to keep hold of the browsing history.

    The family uses Macs, and the kids surf with Safari. I know that browsing history is kept in ~/Library/Safari/History.plist. I figure there should be a way to write either an AppleScript or another script (Python/Ruby/Bash) that can back this file up to a different location (/opt/local/history, etc.). Since the kids know to clear their history when they're done, should the file be periodically backed up with something similar to a cron job, or something like Hazel? While that could work, it seems like it would create a ton of little incremental backups. Or is it possible to 'watch' ~/Library/Safari/History.plist and incrementally add changes to a backup file (saving a diff, so to speak) without losing any data? Any ideas/solutions appreciated.

    UPDATE/EDIT: Got word from the concerned dad that the oldest uses Firefox on a different PC, so the OpenDNS solution (preferably at the router level) is the best answer so far, as it would capture usage for the whole house.
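
    On the "ton of little incremental backups" worry: snapshotting only when the file actually changes keeps the pile small. A minimal sketch to run from cron (or a launchd WatchPaths job, the more Mac-native trigger); paths are examples:

        #!/bin/sh
        # snapshot.sh - copy Safari's history only when it has changed
        SRC="$HOME/Library/Safari/History.plist"
        DST=/opt/local/history
        cmp -s "$SRC" "$DST/last.plist" || {
            cp "$SRC" "$DST/$(date +%s)-History.plist"
            cp "$SRC" "$DST/last.plist"
        }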

    Read the article

  • Write permissions on uploaded files - Linux, Apache, PHP

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice at servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied.

    I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that does not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a working solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it does not work whether the owner of the files is mike or root.

    Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
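
    Since the upload goes through PHP's FTP functions, one clean fix may be to chmod over the same connection right after the transfer (ftp_chmod issues SITE CHMOD under the hood); variable names are illustrative:

        <?php
        if (ftp_put($conn, $remoteFile, $localFile, FTP_BINARY)) {
            ftp_chmod($conn, 0644, $remoteFile);  // readable by Apache
        }

    If the FTP daemon allows it, setting its umask instead (e.g. local_umask=022 in vsftpd) fixes this for every client at once.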

    Read the article

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's a Ubuntu-specific StackExchange website, but I thought that I'd ask here because it's a server-specific question. If I'm wrong in my logic... Well, you people are better at this than I am! O=) On with the show!

    I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format     crypt(3) salt format
          -g            generate random password
          -h hash       password scheme
          -n            omit trailing newline
          -s secret     new password
          -u            generate RFC2307 values (default)
          -v            increase verbosity
          -T file       read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After many Google searches and much forum trolling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
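
    Since the failure is in the package's own generated configuration (note the bad olcRootPW line), one common recovery path is to let dpkg regenerate it from scratch before digging deeper:

        sudo apt-get purge slapd     # removes the broken generated config as well
        sudo apt-get install slapd   # prompts for a fresh admin password
        # or, for an installed-but-half-configured state:
        sudo dpkg-reconfigure slapd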

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote up a Python script to ping, about every minute, the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure.

    What I have found is that when our internet goes out, our Juniper router ceases responding to pings on the external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type; they all look normal. The logs complain about VIP servers being unavailable, but otherwise there is nothing indicative of network issues.

    My questions are these: Does this exonerate our ISP? Or, contrariwise, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down info on the problem?

    UPDATE: Turned out that one of the switches between my monitoring box and the router was a router itself, and occasionally diverting traffic from the gateway to itself. Kudos to those who made suggestions along those lines. Not really sure which answer to mark as accepted, as it was really stuff in the comments that turned out to be right. Thanks for the suggestions.

    Read the article

  • WordPress forbidden page

    - by ffffff
    The HTML comes back with an empty body. If I load preview mode without logging in (so there is no authorization), the response HTML is this:

        <html xmlns="http://www.w3.org/1999/xhtml" lang="ja" xml:lang="ja">
        <head profile="http://purl.org/net/ns/metaprof">
          <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
          <meta http-equiv="Content-Script-Type" content="text/javascript" />
          <meta name="generator" content="WordPress 2.9.2" />
          <meta name="author" content="blog" />
          <link rel="alternate" type="application/atom+xml" href="http://blog.example.com/feed/atom/" title="Atom cite contents" />
          <link rel="start" href="http://blog.example.com" title="blog Home" />
          <link rel="stylesheet" type="text/css" href="http://blog.example.com/wp-content/themes/blog/style.css" />
          <meta name="description" content="blog" />
          <title>blog - </title>
        </head>
        <body class="individual single">
        </div>
        </body>
        </html>

    Do you have any solutions?

    Read the article

  • How do I force a restore over an existing database?

    - by Ian Boyd
    I have a database, and I want to force a restore over top of it. I check the option:

        Overwrite the existing database (WITH REPLACE)

    But, as expected, SSMS is unable to overwrite the existing database. Of course I don't want different filenames; I want to overwrite the existing database. How do I force a restore over an existing database?

    And for the Google search crawler:

        File '%s' is claimed by '%s'(4) and '%s'(3). The WITH MOVE clause can be
        used to relocate one or more files.
        RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3176)

    Update: The script (before I deleted the database, because I needed to get it done) was:

        RESTORE DATABASE [HealthCareGovManager]
        FILE = N'HealthCareGovManager_Data',
        FILE = N'HealthCareGovManager_Archive',
        FILE = N'HealthCareGovManager_AuditLog'
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
        MOVE N'HealthCareGovManager_Data' TO N'D:\CGI Data\HealthCareGovManager.MDF',
        MOVE N'HealthCareGovManager_Archive' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_Log' TO N'D:\CGI Data\HealthCareGovManager.LDF',
        NOUNLOAD, REPLACE, STATS = 10

    I used the UI to delete the existing database, so that I could use the UI to force an overwrite of the (non)existing database. Hopefully there can be an answer so that the next guy can have one. No, nobody was in the context of the database (the error message from other connections is quite different from this error, and I only got to see this error after I killed the other connections).
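
    For the next guy: the script above moves both the Archive and the AuditLog logical files onto the same physical .ndf, which is exactly the situation error 3176 ("claimed by") describes. Giving each logical file its own target may be all the original restore needed (file names are illustrative):

        MOVE N'HealthCareGovManager_Archive'  TO N'D:\CGI Data\HealthCareGovManager_Archive.ndf',
        MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager_AuditLog.ndf',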

    Read the article

  • How to set up neocomplcache?

    - by eddy
    I just started using vim and saw a cool plugin: neocomplcache (http://www.vim.org/scripts/script.php?script_id=2620). My problem is that I can't get it to work properly. After installing, I took the example config from the help files of neocomplcache and added the lines to my .vimrc.

    At first I wanted to create a simple LaTeX file (there are snippets for tex). After typing "begi" a menu appears, and I can choose between the snippets with TAB or <C-n>. But how do I get them to expand? <C-k> does not work, and I don't understand why.

    ========
    .vimrc:
    ========

        ...
        " Plugin key-mappings.
        imap <C-k> <Plug>(neocomplcache_snippets_expand)
        smap <C-k> <Plug>(neocomplcache_snippets_expand)
        inoremap <expr><C-g> neocomplcache#undo_completion()
        inoremap <expr><C-l> neocomplcache#complete_common_string()

        " Recommended key-mappings.
        " <CR>: close popup and save indent.
        inoremap <expr><CR> neocomplcache#smart_close_popup() ."\<CR>"
        " <TAB>: completion.
        inoremap <expr><TAB> pumvisible() ? "\<C-n>" : "\<TAB>"
        " <C-h>, <BS>: close popup and delete backword char.
        inoremap <expr><C-h> neocomplcache#smart_close_popup()."\<C-h>"
        inoremap <expr><BS> neocomplcache#smart_close_popup()."\<C-h>"
        inoremap <expr><C-y> neocomplcache#close_popup()
        inoremap <expr><C-e> neocomplcache#cancel_popup()
        ...
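
    One thing the pasted fragment doesn't show is whether the plugin (and its snippets source) is enabled at all; the mappings are inert without something like the following earlier in .vimrc (options taken from neocomplcache's documented examples; the snippets path is illustrative):

        let g:neocomplcache_enable_at_startup = 1
        let g:neocomplcache_snippets_dir = '~/.vim/snippets'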

    Read the article

  • Mounted NFS directory not writable by Apache / PHP

    - by phpfour
    Need some help here with NFS. Here's what I have (all servers running CentOS 5.6 with SELinux):

    172.17.20.1 - Primary server with static IP. Varnish redirects requests to the web servers.
    172.17.20.2 - Web server 1
    172.17.20.3 - Web server 2

    The application residing on the web servers is running Drupal, and I need both of them to share the same files directory. I have created a folder on 172.17.20.1 called /var/nfs, owned by root. Here is my /etc/exports content:

        /var/nfs        172.17.20.2(rw,sync,no_root_squash) 172.17.20.3(rw,sync,no_root_squash)

    On both web servers (172.17.20.2/3), I have it mounted like below:

        [root@web2 ~]# mount
        ...
        172.17.20.1:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,sync,hard,intr,addr=172.17.20.1)

    On all the servers, I've added the user apache to the root group to get the desired write access:

        [root@main ~]# cat /etc/group
        root:x:0:root,apache
        ...
        apache:x:48:

        [root@web1 ~]# cat /etc/group
        root:x:0:root,apache
        ...
        apache:x:48:

    Despite all this, when I try to write files into the /mnt/nfs/var/nfs folder from Drupal/PHP, it cannot write to it. I even tried with a simple PHP upload script, but it doesn't work, so the problem is not with Drupal. Any help you guys can offer is much appreciated. I've spent hours and hours on it without any success :( Thanks in advance.
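
    Given that SELinux is enabled on all three boxes, it may be the missing piece: by default httpd cannot touch NFS-mounted content no matter what the file permissions say. Worth testing before further permission surgery:

        # on both web servers: let Apache use NFS mounts (persistent)
        setsebool -P httpd_use_nfs 1
        # if it still fails, look for denials
        grep httpd /var/log/audit/audit.log | tail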

    Read the article

  • Pushing Windows Store apps silently

    - by seagull
    As part of a requirement I have been issued, I seek the ability to push apps from the Windows Store (.appx) to remote users. The framework surrounding the data transmission is sorted; what I need to know is whether there is a script that can be sent out, along with a URI or the package proper, to facilitate installation of the Windows Store app on the user's side.

    I am well aware that numerous guides exist from Microsoft on pushing out Line-of-Business (LOB, AKA Enterprise) applications - that is, apps that have been developed in-house and are not for consumption via the Windows Store. That is inappropriate for my requirement, however; the customer wants their clients to receive apps that currently appear on the Windows Store, and for them to be installed in a silent manner.

    I've seen this guide: http://blogs.technet.com/b/keithmayer/archive/2013/02/25/step-by-step-deploying-windows-8-apps-with-system-center-2012-service-pack-1.aspx, which details doing exactly this, but it is only applicable to machines administered by a system running Windows Server 2012 R2 with the 'System Center 2012' bundle installed. The systems I target are considerably more decentralised than this, making that guide inappropriate as well.

    I have a hunch that Microsoft have deliberately designed the Windows Store to be this way, but I figured I ought to ask around before I resign myself to the requirement. Much obliged.

    Read the article
