Search Results

Search found 62161 results on 2487 pages for 'set difference'.


  • Can't Connect w/ SQL Management Studio After Domain Change

    - by Sam
    Our old Small Business Server 2003 (acting as our domain controller) was on the fritz, so we replaced it with a new Windows Server 2008 box and set the server up as our new domain controller. In hindsight it may have been a mistake, but we set up the new server as a replacement and tried to keep as much the same as possible, including the DOMAIN name. The problem was that even though the domain name was the same, the guest computers somehow still realized it was not the exact same domain, so we had to unjoin and rejoin the domain and port over everyone's documents and settings. This morning, when I attempted to connect to my local SQL Server instance, it said that my login failed. When I try to use SQL Management Studio, it throws the error "Package 'Microsoft SQL Management Studio Package' failed to load" on startup, then exits without giving me a chance to change the login. I am using Mixed Authentication and have an administrative account as a backup. Ideas? If there is a more appropriate stack, please let me know where to put it.
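    One possible recovery path, sketched below on the assumption that the SQL-authentication admin account still works; the server name, account names, and password are placeholders, not values from this setup:
        REM Connect with the SQL-auth admin account and recreate the Windows login
        REM for the account in the new domain, then restore its server role.
        sqlcmd -S localhost -U sqladmin -P "AdminPassword" ^
          -Q "CREATE LOGIN [NEWDOMAIN\sam] FROM WINDOWS; EXEC sp_addsrvrolemember 'NEWDOMAIN\sam', 'sysadmin';"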


  • Sendmail: Mail delivery of same domain to internal or external mail server

    - by Silkograph
    It's a bit difficult to explain but a very simple problem. We have an internal sendmail server and a hosted server, both set to the same domain name, and a mix of mail accounts. For example, we have two users in one office: [email protected] is local only, while [email protected] is internal plus external. Internal means we create the user on the local Linux box where sendmail is set up. External means we create the user on both the local box and the hosted server. [email protected] can send mail to any internal user created on the Linux box where sendmail is installed, but he cannot send mail to outside domains, and no mail can be sent to him from outside because there is no account for him on the external hosted server. [email protected] can send mail internally as well as to all other domains through sendmail's smart_host feature, which uses the hosted server's SMTP, and he gets all external email internally through Fetchmail on the Linux box. Now we have a third user, [email protected], who will always be outstation and can use the external server only. I cannot create an account for him on the local Linux box, because his mail would then be delivered locally only, and I don't want to create an alias that forwards his mail to a Gmail or Yahoo account. I want internal users to be able to send email to his external account. How can this be done? Thanks in advance.
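    A hedged sketch of one approach, assuming the outstation user has no local account and the hosted server will accept mail for him; the hostname below is a placeholder. sendmail's LUSER_RELAY routes mail addressed to local-domain recipients who have no local account to another host:
        # In /etc/mail/sendmail.mc, add (hostname is a placeholder):
        #   define(`LUSER_RELAY', `smtp:[mail.hosted-provider.example]')dnl
        # then rebuild the config and restart sendmail:
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        /etc/init.d/sendmail restart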


  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting: I have an Amazon CloudFront distribution that was originally set up as secured, so objects in it required URL signing. For example, a valid URL used to be of the following format:
        https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC
    The distribution points to an S3 bucket that also used to be secured (it only allowed access through CloudFront).
    What happened: At some point the URL signing expired and requests started returning 403. Since we no longer need to keep the same security level, I recently changed the settings of both the CloudFront distribution and the S3 bucket it points to, making them public. I then tried to invalidate objects in this distribution. The invalidation did not throw any errors, but it did not seem to succeed: requests to the same CloudFront URL (with or without the query string) still return 403. The response headers look like:
        HTTP/1.1 403 Forbidden
        Server: CloudFront
        Date: Mon, 18 Aug 2014 15:16:08 GMT
        Content-Type: text/xml
        Content-Length: 110
        Connection: keep-alive
        X-Cache: Error from cloudfront
        Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront)
        X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B==
    Things I tried: I set up another CloudFront distribution that points to the same S3 bucket as its origin server. Requests to the same object through the new distribution were successful.
    The question: Did anyone encounter a situation where a CloudFront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated? Thanks for your help!
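    For reference, a hedged sketch of forcing an invalidation from the AWS CLI; the distribution ID is a placeholder, and on older CLI releases CloudFront was still a preview service that had to be enabled first:
        # aws configure set preview.cloudfront true   # only needed on older AWS CLI releases
        aws cloudfront create-invalidation \
          --distribution-id EDFDVBD6EXAMPLE \
          --paths "/images/TheImage.jpg" "/*"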


  • Monitoring whether Google Apps email address is reachable

    - by Acorn
    Backstory: I bungled things a bit the other day and inadvertently deleted the DNS overrides for my domain, including the MX records that point to Google Apps, causing 2 days of lost emails. What I want: I want to be able to monitor the email address/account so that I can be alerted if for any reason something has gone wrong and emails aren't arriving. Thoughts: I was thinking there might be a way to test the email without having to send an actual message. Does this exist? This wouldn't help if the DNS has reset itself to a different mailserver, would it? The other idea was sending periodic emails to check that the address is working. How would you automate this? You'd need to somehow check that the email had arrived as well as checking if it had bounced. Are there any scripts that exist that would do something like this? What would be the best method? Maybe a combination of checking that the MX records for the domain are set to what they're supposed to be set to, and sending automatic test emails to check that things are still functioning on the Google Apps end?
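    A minimal sketch of the MX-record half of that idea, assuming the domain's records should point at Google's ASPMX servers; the domain and alert address are placeholders, and it would run from cron:
        #!/bin/sh
        # Alert if the domain's MX records no longer point at Google Apps.
        if ! dig +short MX example.com | grep -qi 'google.com'; then
            echo "MX records for example.com no longer point at Google Apps" \
                | mail -s "MX check failed for example.com" admin@example.net
        fi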


  • Options for small windows network setup without dedicated server?

    - by Mitch
    I'm very weak on networking and hope someone can point me in the right direction: I have written some Windows client/server software which incorporates a database located on a Windows server. I have a test installation running at a customer's office where the server has a static IP address. In this case it's easy for the clients to access the database because of the fixed IP address. Also, customers with network servers generally have specialist support staff to set up my software, so it's not such a problem for me. However I also need to offer the software to customers who have small offices with fewer than 10 PCs and no dedicated network server. In this case I want the customer to be able to nominate one PC as the database "server", install my software there, and have the clients access it. But in this situation I believe the "server" PC may not have a dedicated IP address. Q1: What is the best way to set this up simply and make it work? Can I reliably reference the "server" by using its name, or is there a way to assign dummy fixed IP addresses? Ideally this needs to be workable on small networks running a mixture of XP/Vista/Windows 7, as my target market may well have mixed OSes etc. I guess this would be akin to home networking? Many thanks Mitch
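    One commonly suggested option, sketched here as an assumption rather than a tested recipe: give the nominated "server" PC a fixed address outside the router's DHCP pool and have the clients connect by that address (or by machine name, which usually resolves on small LANs via NetBIOS broadcast). The adapter name and addresses below are placeholders:
        REM Run on the nominated "server" PC (XP/Vista/7); adjust the interface
        REM name, address, mask and gateway to the local network.
        netsh interface ip set address name="Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1 1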


  • VirtualHost on WAMPSERVER not working

    - by Martin C
    I currently have WAMPSERVER 2.2 set up on my PC and I'm trying to set up a new host called pplocal.local. I made the change in httpd.conf to uncomment this:
        Include conf/extra/httpd-vhosts.conf
    Then I edited httpd-vhosts.conf and added the following:
        NameVirtualHost 127.0.0.1
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/"
            ServerName localhost
        </VirtualHost>
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/pp/"
            ServerName pplocal.local
            <Directory "E:/wamp2/www/pp/">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
            CustomLog "E:\wamp2\logs\pplocal-access.log" common
            ErrorLog "E:\wamp2\logs\pplocal-error.log"
        </VirtualHost>
    In my Windows 'hosts' file I added:
        127.0.0.1 localhost
        127.0.0.1 pplocal.local
    Then I restarted Apache. If I type localhost in my browser I get the files at E:/wamp2/www/. If I type pplocal.local in my browser I also get the files at E:/wamp2/www/ instead of those at E:/wamp2/www/pp/. I have followed several tutorials and can't see what I'm doing wrong. I'm new to editing the files associated with Apache, so any advice is appreciated. Thanks
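    One quick check worth sketching here (the WAMP path and Apache version are guesses and will differ by install): Apache can dump how it actually parsed the name-based virtual hosts, which usually shows whether the pplocal.local block was picked up at all:
        REM Run from a command prompt; adjust the Apache version in the path.
        cd /d E:\wamp2\bin\apache\apache2.2.22\bin
        httpd.exe -S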


  • How can I make XAnalogTV fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen. If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen. For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true. Is there any way to force XAnalogTV to fill my entire screen? Relevant screenshots (windowed, maximized, and full-screen, respectively):


  • Find the IP of the router between me and main router/gateway?

    - by Crash893
    I have a Netgear router that is the main router/gateway in and out of the network. Then I have a Linksys 54G router with WiFi that we use as our main WiFi access point. The Cat5 runs from the router to the WiFi router's LAN port (not WAN), and if you connect to that WAP you are basically on the network. I want to be able to find out what the IP address of that WAP is, but so far I can't figure out how. I've tried doing a tracert to the gateway IP but I get nothing. I've scanned all the IP addresses in the network. I've gone to the Netgear main router and looked under attached devices, but it doesn't say anything. Any ideas on how I can figure out how to administer it? (I do have limited physical access, but my attempt to plug into it came back with a faulty IP, in the 169 range where the rest of the office is 192.168.) I never set this router up to begin with, so I'm reluctant to just kill it with the reset button because I can't get in to see what the settings are set to.


  • How to Eliminate Black Bars from PowerPoint Videos?

    - by appu
    Hi, I am running a digital signage system for my client. The basic installation is a vertically oriented 42" LCD TV with a 1920x1080 screen resolution (the reverse of the usual landscape setup, since the TV is mounted in portrait). Please check out the following link for the basic screen-division layout I want to set up: http://flickr.com/photos/55097319@N03/5410208856 In the division labeled "ppt" I plan to run a PowerPoint presentation. The screen division is 360x1476 pixels. As there isn't an option in PowerPoint to specify slide size in terms of resolution, I followed this article on Indezine http://indezine.com/products/powerpoint/books/perfectmedicalpres02.html and, to get a slide of my preferred resolution, divided 360 and 1476 each by 72, which gives me 5" x 20.5" as the slide size for my ppt. After setting up the slide size as per the above dimensions, I used Sizer (http://brianapps.net) to resize my ppt window to 360x1476 so that when I record I do not get any black bars. But after launching the recording there are side black bars visible, which Camtasia records and brings into the capture with black bars: http://www.flickr.com/photos/55097319@N03/5409597049 My question is: after doing the above, and as explained in the following video link, why do I still get black bars? http://feedback.techsmith.com/techsmith/topics/eliminate_black_bars_in_your_powerpoint_recordings Is there an option in Camtasia to stretch the recording to cover up the black bars, or any other alternative way to get rid of those black bars while recording a ppt at my preferred dimensions? Notes: the TechSmith video above asks me to adjust my desktop screen resolution, which my display chipset does not allow me to set. I set up the show in PowerPoint to be "browsed by an individual (window)". Also, the signage software only supports SWFs and video file formats natively, not ppt, pptx etc. Thanks bhavani.


  • Damaged partition after disk image

    - by Charles Gargent
    I am trying to clone/back up a disk with Windows 7 Pro 64-bit on it. First I tried EASEUS Todo Backup and used the disk clone option without sector-by-sector copy. I then plugged in the new drive and got the following error: "Invalid or damaged Bootable partition". I then plugged the old drive back in and was greeted with the same error. My next step was to try the sector-by-sector disk clone, but I still get the same error. I have tried fixing the MBR with the Windows disc but that makes no difference. I have tried some other free tools and I get the same error. I have tried this on a different machine running Windows 7 Enterprise 32-bit without this problem. I have done some searching and the only thing I can come up with is this post from the Acronis forums http://forum.acronis.com/forum/8254 suggesting that the BIOS is reading my disk geometry incorrectly. Can anyone shed any light on this? Is there a way I can fix this either in the BIOS, or repair the MBR every time I reimage it?
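    For what it's worth, a hedged sketch of the specific MBR/boot-sector repairs available from the Windows 7 disc's recovery command prompt, in case only the automatic Startup Repair was run; these are standard Windows RE commands, not specific to any imaging tool:
        REM From the System Recovery Options command prompt on the Windows 7 disc:
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd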


  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the Wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the Method "Shared to other computers". The eMac can now access the Internet. Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to enable port forwarding for 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is, how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
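    A minimal sketch of that laptop-side forwarding with iptables, using the eMac address given above; the wireless uplink interface name (wlan0) is an assumption, and NetworkManager's shared connection should already handle the outbound NAT, so these rules only add the inbound port forward:
        # Forward incoming 6112/tcp and 6112/udp on the laptop to the eMac.
        sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -A FORWARD -p tcp -d 10.42.43.1 --dport 6112 -j ACCEPT
        sudo iptables -A FORWARD -p udp -d 10.42.43.1 --dport 6112 -j ACCEPT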


  • Mongrel Cluster on Ubuntu Server Karmic

    - by trobrock
    I am trying to get Mongrel Cluster working on my Ubuntu Server Karmic box in preparation for setting up Capistrano. I've been trying to get the two to work all day and finally decided to completely remove Capistrano and see if I can just get Mongrel Cluster to work. I ran this to install mongrel cluster:
        gem install mongrel mongrel_cluster
    Everything installed fine, but when I change into my app's directory:
        # mongrel_rails
        -bash: mongrel_rails: command not found
    I can run it from its install location:
        # /var/lib/gems/1.8/bin/mongrel_rails
        Usage: mongrel_rails <command> [options]
        Available commands are: ...
    It lets me build the cluster configuration file fine, but when I run the cluster::start command:
        # /var/lib/gems/1.8/bin/mongrel_rails cluster::start
        starting port 8000
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8000 -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log
        starting port 8001
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8001 -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log
        starting port 8002
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8002 -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log
    It seems it isn't calling it from the right directory after that command. What can I do to fix this? I tried setting the path previously when trying to set up Capistrano, but the path didn't stay set when Capistrano used ssh to run the commands.
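    A hedged sketch of the usual fix, using the gem bin directory from above: put it on PATH so that mongrel_rails can re-invoke itself when it spawns the per-port "mongrel_rails start" commands; note that Capistrano's non-login SSH shells may still need the path set separately:
        # Put the RubyGems bin directory on PATH for login shells (sketch).
        echo 'export PATH=$PATH:/var/lib/gems/1.8/bin' | sudo tee /etc/profile.d/rubygems.sh
        # For the current shell:
        export PATH=$PATH:/var/lib/gems/1.8/bin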


  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMWare virtualization environment (three Dell R710's with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMWare HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VM's will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported" http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."


  • Which events specifically cause Windows 2008 to mark a SAN volume offline?

    - by Jeremy
    I am searching for specific criteria/events that will cause Windows 2008 to mark a SAN volume as offline in disk management, even though it is connected to that SAN volume via FC or iSCSI. Microsoft states that "A dynamic disk may become Offline if it is corrupted or intermittently unavailable. A dynamic disk may also become Offline if you attempt to import a foreign (dynamic) disk and the import fails. An error icon appears on the Offline disk. Only dynamic disks display the Missing or Offline status." I am specifically wondering if, on the SAN, changing the path to the disk (such as the disk being presented to the host via a different iSCSI target IQN or a different LUN #) would cause a volume to be offlined in disk management. Thanks! Edit: I have already found two reasons why a disk might be set offline, disk signature collisions and the SAN disk policy. Bounty would be awarded to someone who can find further documented reasons related to changes in the volume's path. Disk signature collisions: http://blogs.technet.com/b/markrussinovich/archive/2011/11/08/3463572.aspx SAN disk policy: http://jeffwouters.nl/index.php/2011/06/disk-offline-with-error-the-disk-is-offline-because-of-a-policy-set-by-an-administrator/
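    As an aside, a hedged sketch of how the SAN disk policy and offline state can be inspected and cleared with diskpart on Server 2008: show the current policy, set it to OnlineAll, then clear the read-only attribute and bring the disk online (the disk number is an example):
        diskpart
        DISKPART> san
        DISKPART> san policy=OnlineAll
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> attributes disk clear readonly
        DISKPART> online disk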


  • Getting SMB file shares working over a PPTP VPN

    - by Ben Scott
    I'm having issues getting SMB file shares working over a PPTP VPN. The server setup consists of a security device (DrayTek V3300) which passes the PPTP authentication to a SBS2003 server running RRAS. The server is the DC and provides DNS and WINS, the single NIC's name server is set to 127.0.0.1, and DHCP on the DrayTek sets the server IP as the DNS. If I create a new VPN connection in Win7, leaving everything as default apart from the server, username, password and domain, I can:
        - ping everything by IP address
        - resolve IPs with nslookup using their fully-qualified name, as in nslookup fileserver.mydomain.local
        - ping machines by fully-qualified name, as in ping fileserver.mydomain.local
    However if I try to access a file share:
        - within Explorer, I get "Windows cannot access ..." with "Error code: 0x80004005 Unspecified Error"
        - using net use z: \\fileserver.mydomain.local\share, I get "System error 53 has occurred. The network path was not found."
    If I add the machine name to my HOSTS file I can use the file share, which is my last-ditch workaround, but I have a number of VPN users and would rather a solution that doesn't involve me trying to hand-edit system files on computers half a country away. If I set the WINS server explicitly in the connection's IPv4 settings I don't have to use the FQN to ping the machine, but that doesn't change anything else.


  • iptables forwarding to a dummy interface

    - by madinc
    Hi, I'm trying to accomplish the following: I have a box with a service listening on a dummy interface (say 172.16.0.1), UDP port 5555. What I'd like to do is take packets that arrive on interfaces eth0 (1.1.1.1:5555) and eth1 (2.2.2.2:5555), forward them to the service on the dummy interface, and have replies go back to clients out the same physical interface they came in on. Clients must think they're talking to 1.1.1.1:5555 or 2.2.2.2:5555. I think I need a mix of iptables rules and packet marking, plus some iproute rules (if it's possible at all). What I tried is to catch packets coming in from eth0 and eth1 on UDP port 5555, mark them with 1 and 2 respectively, and --save-mark into the connmark. Then I used a DNAT to 172.16.0.1. The service seems to be getting the packets. Now I'm not sure how to do the reverse. It seems that for packets originating from the box, you can't do anything before the routing decision, but that would be the place to restore the marks and thus make a routing decision based on them. Here's what I have so far:
        iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j MARK --set-mark 1
        iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j MARK --set-mark 2
        iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j CONNMARK --save-mark
        iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j CONNMARK --save-mark
        iptables -t nat -A PREROUTING -m mark --mark 1 -j DNAT --to-destination 172.16.0.1
        iptables -t nat -A PREROUTING -m mark --mark 2 -j DNAT --to-destination 172.16.0.1
        # What next?
    As I said, I'm not even sure it can be done. To give a bit of background, it's an old OpenVPN installation that cannot be upgraded (otherwise I'd install a recent version that supports multihoming natively). Thanks for any help.
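    A hedged sketch of the reverse direction, in case it helps frame the question: restore the connection mark on locally generated replies in mangle/OUTPUT, then use policy routing keyed on the mark. The gateway addresses (198.51.100.1, 203.0.113.1) and table numbers are placeholders; conntrack should rewrite the source address back to 1.1.1.1 or 2.2.2.2 when it un-DNATs the replies:
        iptables -t mangle -A OUTPUT -p udp --sport 5555 -j CONNMARK --restore-mark
        ip rule add fwmark 1 table 101
        ip rule add fwmark 2 table 102
        ip route add default via 198.51.100.1 dev eth0 table 101   # placeholder gateway for eth0
        ip route add default via 203.0.113.1 dev eth1 table 102    # placeholder gateway for eth1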


  • Postfix qmgr process causes heavy overload on mailservers

    - by Mattias
    We are using Postfix as the MTA for our e-mail marketing software, and once in a while we see the load on one of the mailservers rise above 5. The load is caused by the qmgr process, which is the heart of Postfix, and I can see that it is consuming a lot of CPU resources. The process seems to be stuck, because after 15 minutes it is still doing the same thing and still increasing the load. Once I restart the postfix service the load rapidly decreases to below 1 and Postfix continues to send e-mails without any problems. I'm wondering if anyone else has encountered this problem and if people have suggestions on how to prevent it. The problem shows up on all our mailservers, but almost never on more than one at a time. It seems to be triggered only when we are sending a mailing, but the size (10 or 100,000 e-mails) doesn't seem to make a difference. It happens maybe once a week or even less often, and the time and day are different every time. We tried to solve the problem by decreasing the number of messages qmgr is allowed to process, but this didn't solve it. We are using Postfix 2.5.5 on Debian Lenny 5.0.8 (Postfix is installed through the default Debian repository). No special messages can be found in the logs (syslog, messages, mail.*). Thank you for your time
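    A hedged sketch of what could be captured the next time it happens, before restarting Postfix (assuming the qshape tool that ships with Postfix is available on Debian):
        postqueue -p | tail -n 1           # total messages sitting in the queue
        qshape deferred | head -n 20       # how the deferred queue is distributed by domain and age
        strace -c -f -p "$(pgrep -x qmgr)" # sample what qmgr is doing; interrupt with Ctrl+C after ~30s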


  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I updated my mail server last night from Debian etch to lenny. So far I've encountered a problem with my Postfix installation, mainly that I managed to break the IMAP access somehow. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is:
        Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65]
        Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE
        Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login
        Feb 12 11:57:16 mail authdaemond: authmysql: trying this module
        Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1)
        Feb 12 11:57:16 mail authdaemond: password matches successfully
        Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>
        Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>
    ...and then Thunderbird proceeds to complain that it can't log in / lost the connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've mainly been looking for a way to make the courier-imap logging more verbose, like can be seen for example here. Edit: Sorry about the mess, I've found that I had been funneling the log through grep imap, which naturally didn't display entries for authdaemond. The verbose logging configuration entry is found in /etc/courier/imapd under DEBUG_LOGIN (set to 1 to enable verbose logging, set to 2 to also enable dumping plaintext passwords to the logfile. Careful.)


  • Running python script in incrontab in Debian

    - by WilliamMayor
    I have a user, dropbox, that runs the Dropbox daemon, and I want to monitor the directories in the Dropbox directory for new files and run a Python script when they appear. I have the Python script, which I know works:
        $ /home/dropbox/monitor.py
        Trying to get lock
        Got lock, waiting for Dropbox to be idle
        Dropbox idle
        Finding instructions
        Done, releasing lock
    I have an incrontab entry:
        $ incrontab -l
        /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor.py | logger
        /home/dropbox/test IN_CREATE logger "$$ $@ $# $% $&"
    When I add a file to the test directory I see the output in /var/log/syslog:
        $ touch /home/dropbox/test/a
        $ tail /var/log/syslog
        ...
        Nov 9 10:18:27 vps incrond[1354]: (dropbox) CMD (logger "$ /home/dropbox/test a IN_CREATE 256")
        Nov 9 10:18:27 vps logger: "$ /home/dropbox/test a IN_CREATE 256"
        ...
    However, when I add a file to the Dropbox directory the command doesn't seem to run:
        $ touch /home/dropbox/Dropbox/a
        $ tail /var/log/syslog
        ...
        Nov 9 10:24:16 vps incrond[1354]: (dropbox) CMD (/home/dropbox/monitor.py | logger)
        ...
    So the incron daemon notices the new file and the correct command is found to be executed, but it never actually gets executed, nor are there any error messages. It kind of seems like incrontab can only be used to run the most simple of commands. This might be a similar question to "Incrond running but not executing commands CentOS 6.4", but I think that I don't have env problems; every path is absolute. I tried changing .../monitor.py to /usr/bin/python2.7 .../monitor.py just in case, but it didn't make any difference.
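    A hedged sketch of one workaround, on the assumption (not verified here) that incron executes the command directly rather than through a shell, so the "| logger" part is passed to monitor.py as arguments instead of creating a pipe; a wrapper script keeps the incrontab entry down to a single path:
        # Create a small wrapper that does the piping inside a real shell.
        printf '#!/bin/sh\n/home/dropbox/monitor.py 2>&1 | logger\n' > /home/dropbox/run-monitor.sh
        chmod +x /home/dropbox/run-monitor.sh
        # The incrontab entry would then become:
        #   /home/dropbox/Dropbox IN_CREATE /home/dropbox/run-monitor.sh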


  • ADFS Relying Party

    - by user49607
    I'm trying to set up an Active Directory Federation Services Relying Party and I get the following error. I've tried adding <pages validateRequest="false"> to web.config and it doesn't make a difference. Can someone help me out?
        Server Error in '/test' Application.
        A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").
        Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. To allow pages to override application request validation settings, set the requestValidationMode attribute in the httpRuntime configuration section to requestValidationMode="2.0". Example: <httpRuntime requestValidationMode="2.0" />. After setting this value, you can then disable request validation by setting validateRequest="false" in the Page directive or in the <pages> configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case. For more information, see http://go.microsoft.com/fwlink/?LinkId=153133.
        Exception Details: System.Web.HttpRequestValidationException: A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Stack Trace:
        [HttpRequestValidationException (0x80004005): A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").]
           System.Web.HttpRequest.ValidateString(String value, String collectionKey, RequestValidationSource requestCollection) +11309476
           System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, RequestValidationSource requestCollection) +82
           System.Web.HttpRequest.get_Form() +186
           Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.IsSignInResponse(HttpRequest request) +26
           Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.CanReadSignInResponse(HttpRequest request, Boolean onPage) +145
           Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.OnAuthenticateRequest(Object sender, EventArgs args) +108
           System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +266


  • 403.4 won't redirect in IE7

    - by Jeremy Morgan
    I have a secured folder that requires SSL, and I have set it up in IIS 6 to require SSL. We don't want visitors to be greeted with the "must be secure connection" error, so I have modified the 403.4 error page to contain the following:
        function redirectToHttps() {
            var httpURL = window.location.hostname + window.location.pathname;
            var httpsURL = "https://" + httpURL;
            window.location = httpsURL;
        }
        redirectToHttps();
    And this solution works great for every browser but IE7. In any other browser, if you type in http://www.mysite.com/securedfolder it will automatically redirect you to https://www.mysite.com/securedfolder with no message or anything (the intended action). But in Internet Explorer 7 ONLY it will bring up a page that says:
        The website declined to show this webpage
        Most Likely Causes: This website requires you to log in
    This is something we don't want, of course. I have verified that JavaScript is enabled, and the security settings have no effect; even when I set them to the lowest level I get the same error. I'm wondering, has anyone else seen this before?


  • Can I convert an existing Firefox installation to ESR without a re-install?

    - by Iszi
    It took some jumping through hoops (including a mailing list subscription that I apparently didn't need) but I finally found where to download the Firefox ESR. This is great for fresh installs, but I was wondering if there's a way to simply convert existing installations to the ESR configuration without having to do a full install. As I understand it, the only difference between ESR and regular Firefox will be how they receive updates. After the new standard version of Firefox comes out, ESR releases will only receive critical security updates and bug fixes for the remainder of their support life. Newer versions of Firefox's standard build will have all the latest and greatest features, while ESR releases are meant to provide stability for environments that can't be expected to keep up with a new full version number change as often as Mozilla does them. In regular Firefox, the About screen shows that I am using the "release" update channel. Is switching to ESR really just a matter of switching the update channel? I presume this can be done in about:config by changing app.update.channel and probably also app.update.url. However, I don't know what these values should be for ESR or if anything else should be tweaked. So, is it possible to switch to ESR without a reinstall and, if so, how? (Note: While this question was written originally for Firefox 10, I expect any answers will apply to future ESR versions as well.)


  • Setting up a Network Bridge on Linux VM (Windows 7 Host)

    - by GrandAdmiral
    I would like to use NetEm to simulate a low-bandwidth environment while testing an Internet-connected device. My plan is to set up a bridge in a Linux VM (Linux Mint 13) on a Windows 7 host, so that I can then use NetEm in the Linux VM to limit the bandwidth to the external device. Unfortunately I'm having trouble setting up the bridge. I went with the following script:
        ifconfig eth0 0.0.0.0 promisc up
        ifconfig eth1 0.0.0.0 promisc up
    Then create the bridge and bring it up:
        brctl addbr br0
        brctl setfd br0 0
        brctl addif br0 eth0
        brctl addif br0 eth1
        dhclient br0
        ifconfig br0 up
    When I run that script, I see the following warning:
        Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service smbd reload
        Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the reload(8) utility, e.g. reload smbd
    The device connecting to the bridge is able to obtain an IP address, but it can only ping the IP address of the bridge (both are 10.2.32.xx). Then, after a few minutes, other parts of our network go down. I'm not sure why, but once I kill the bridge the network is fine. Is it possible to set up a network bridge in a Linux VM? Do I need to do something else with the dhclient br0 part of the script? By the way, I'm using VirtualBox. The wired connection is eth0 and the wireless connection is eth1. The wired connection goes to the device and the wireless connection goes to the network. Both adapters are set up as bridged adapters with promiscuous mode set to "allow all".
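    For the NetEm half, once the bridge is sorted out, a minimal sketch of the common netem + tbf recipe; the delay, rate, and interface name are example values, and this shapes traffic leaving the VM on eth0 towards the device:
        tc qdisc add dev eth0 root handle 1:0 netem delay 100ms
        tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000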


  • Bypassing Squid on FreeBSD with PF

    - by epema
    I have PF + squid31 on FreeBSD 9.0, and I want some hosts (aka goodguys) to bypass the proxy so that torrents are not logged. Also, I am not sure about transparent mode: it means that I don't have to configure proxy settings on the client side, right? I have tried adding a redirect exception:
        no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any
    However, no luck :( Here is a quick look at my conf files.
    SQUID /usr/local/etc/squid/squid.conf:
        http_port 192.168.1.1:8080 transparent
    RC /etc/rc.conf:
        gateway_enable="YES"
        pf_enable="YES"
        pf_rules="/usr/local/etc/pf.conf"
        pflog_enable="YES"
        squid_enable="YES"
    I have squid31 installed from ports with SQUID_PF "Enable transparent proxying with PF" on.
    PF /usr/local/etc/pf.conf:
        int_if="re0"
        ext_if="bge0"
        localnet="{ 192.168.1.0/24 }"
        table <goodguys> const { "192.168.1.219", "192.168.1.233" }
        set block-policy drop
        set skip on lo0
        scrub in all fragment reassemble
        scrub out all random-id max-mss 1440
        block in on $ext_if
        pass out on $ext_if keep state
        block in on $int_if
        pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
        pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
        pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
        pass out on $int_if keep state
    What lines should I add to the conf files? I am assuming that the problem is on the firewall (pf).
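    A hedged sketch of what is usually needed (untested here): a transparent setup needs an rdr rule sending the LAN's port-80 traffic to Squid, with a "no rdr" exception for the goodguys table placed before it, since the first matching translation rule wins. Something along these lines in pf.conf, then reload the ruleset:
        # In pf.conf (order matters: "no rdr" before the rdr rule):
        #   no rdr on $int_if inet proto tcp from <goodguys> to any port 80
        #   rdr on $int_if inet proto tcp from $localnet to any port 80 -> 192.168.1.1 port 8080
        pfctl -f /usr/local/etc/pf.conf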


  • PC can't detect second RAM installed

    - by kulwinder
    I have a PC with 512 MB of RAM installed (motherboard manufacturer MICRO-STAR, chipset P4M800). The PC was running very slowly, so I decided to upgrade the RAM. I installed CPU-Z to check the RAM installed on the machine and also had a look at the stick itself: 512 MB PC3200 400 MHz DDR. My motherboard supports 200 MHz, but it was working OK. So I bought 2 GB, having checked in the manual that the board supports up to 2 GB of RAM. I installed a 2 GB PC3200 400 MHz stick, the same type as the old one, and plugged in both, even though the motherboard only supports up to 2 GB, but the system spec only shows 512 MB (minus 64 MB of shared VGA memory). I checked in CPU-Z and it detects both: slot 1 512 MB, slot 2 2048 MB. Comparing the screens for both slots, they are the same: 2.5 V, frequencies 166 MHz and 200 MHz. The only difference is that the 2 GB stick shows 133 MHz, 166 MHz and 200 MHz under the timings table, while the 512 MB stick shows only 166 MHz and 200 MHz. I searched on Google and can't seem to figure out what's wrong with it. If I only plug in the 2 GB stick, the PC doesn't boot up, as if the RAM isn't working. With only the 512 MB stick plugged in it seems OK. Please help.

