Search Results

Search found 177198 results on 7088 pages for 'not programming'.


  • Credentials work for SSMS but not (ODBC) LogParser script

    - by justSteve
    Via SSMS I'm able to connect and navigate the server/db in question, but when connecting via a LogParser script the same credentials fail. I'm executing this from the same box the server is running on; the username is owner/dbo of the db, and the db has mixed mode authentication. [linebreaks for clarity]

        C:\TTS\tools\LogParser> c:\tts\tools\logparser\logparser file:c:\tts\tools\logparser\errors2SQL.sql?source="C:\inetpub\logs\LogFiles\W3SVC8\u_ex100521.log" -i:IISW3C -o:SQL -createTable:ON -oConnString:"Driver={SQL Server Native Client 10.0};Server=servername\SQLEXPRESS;db=Tter;uid=logger2;pwd=foo" -stats:OFF
        Task aborted.
        Error connecting to ODBC Server
        SQL State: 28000
        Native Error: 18456
        Error Message: [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'logger2'.
        C:\TTS\tools\LogParser>
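
    Error 18456 with SQL state 28000 is a plain authentication failure, so it may help to test the login outside LogParser entirely. A minimal check, assuming the sqlcmd client is installed (the instance name and credentials below are the ones from the question):

        sqlcmd -S servername\SQLEXPRESS -U logger2 -P foo -Q "SELECT SUSER_NAME();"

    If this fails too, the problem is the login itself (a disabled login, or SQL authentication not actually enabled on the instance) rather than anything in the LogParser command line.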

    Read the article

  • Microsoft Office "Read-only" warning not appearing on Samba shares on Mac OS X Server

    - by bongo
    Hi, some of my users don't get the "read-only" warning dialog (though "read-only" does appear in the title bar and the document is indeed opened read-only) when opening Office 2007 documents already opened by another user. We run the Samba share off an Xsan volume under Mac OS X 10.5.8 Server. Strict locking is on but oplocks are off (set from Server Admin). At home, with a simple Samba share on 10.6.3 Server, it works correctly. Any ideas? Or is this 10.5.8-specific behavior?
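
    For reference, the locking setup described above corresponds to something like the following in smb.conf (the share name is hypothetical; Server Admin generates these values), and the warning dialog depends on Office detecting a sharing conflict through exactly these locks:

        [share]
            strict locking = yes
            oplocks = no
            level2 oplocks = no

    Diffing the smb.conf that Server Admin generated on the 10.5.8 box against the working 10.6.3 one may show which locking option differs.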

    Read the article

  • php-fpm stops working after several days, returns 'No input file specified'

    - by Magic
    My server runs 64-bit Ubuntu with nginx and php-fpm, and everything works well. But after several days, the browser starts displaying 'No input file specified'. After I restart php-fpm, everything runs fine again, but the situation recurs, so I have to restart php-fpm every few days. Does anyone know what the problem is? nginx -V output:

        sshadmin@ubuntu:~$ nginx -V
        nginx: nginx version: nginx/0.9.7
        nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
        nginx: TLS SNI support enabled
        nginx: configure arguments: --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_gzip_static_module
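
    One hedged mitigation: when 'No input file specified' only appears after days of uptime, long-lived pool workers have often degraded (leaked file descriptors, stale handles to a replaced docroot, and so on). Recycling workers after a fixed number of requests can confirm or mask this; in the pool configuration (the path below is a typical location, not a certainty):

        ; e.g. /etc/php5/fpm/pool.d/www.conf
        pm.max_requests = 500

    This makes php-fpm respawn each worker after 500 requests instead of letting it live for days.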

    Read the article

  • Nagios check_host_alive and check_ping not showing host as down

    - by Kyle
    I am using the check_host_alive command to send 5 packets every minute to all my routers at remote locations. Today I received a notification from the AT&T Global Client Support Center that a router was down (their notices can take 5-30 minutes to go out) but never received a notice from Nagios. I went into Nagios and it was showing the host as alive with a latency of 0ms. This tells me it is treating the automated "TTL expired in transit" response from my router in the data center as a reply from the remote router. Is there any way to tell Nagios to check where the reply is coming from? I feel like other people have to have had this issue... I tested it with the check_ping command and it produced the same results. I have the command defined with %hostname% and the proper IP in the host definition, and it works fine for telling me the latency is high. Any ideas are welcome; I have already exercised my Google skills with no results. EDIT:

        root@IM-UBTU:/# /usr/local/nagios/libexec/check_ping -H 192.168.250.1 -w 100.0,10% -c 200.0,20% -vvv
        CMD: /bin/ping -n -U -w 10 -c 5 192.168.250.1
        Output: PING 192.168.250.1 (192.168.250.1) 56(84) bytes of data.
        Output: From 10.69.10.2 icmp_seq=1 Time to live exceeded

    It knows something is wrong; why doesn't it give me a warning?
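
    One workaround - a sketch under the assumption that the stock plugin keeps counting time-exceeded messages as replies - is to wrap check_ping in a small script that goes CRITICAL whenever the replies are ICMP "Time to live exceeded" messages from an intermediate hop rather than echo replies from the target:

        #!/bin/sh
        # check_ping_strict <host> - hedged wrapper around check_ping.
        # Fails hard when replies are TTL-expired messages, not echo replies.
        HOST="$1"
        OUT=$(/bin/ping -n -U -w 10 -c 5 "$HOST" 2>&1)
        if echo "$OUT" | grep -q "Time to live exceeded"; then
            echo "CRITICAL - TTL expired in transit; replies come from an intermediate hop, not $HOST"
            exit 2    # Nagios exit code for CRITICAL
        fi
        # otherwise defer to the stock plugin
        exec /usr/local/nagios/libexec/check_ping -H "$HOST" -w 100.0,10% -c 200.0,20%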

    Read the article

  • s3fs not maintaining uid/gid

    - by Publiccert
    I'm mounting S3 using s3fs with the following command:

        sudo /usr/local/bin/s3fs -o use_cache=/tmp,uid=1000,gid=1000,allow_other svn.domain.com /svn

    allow_other is confirmed to be working, along with the cache. However, no variation or placement of the uid/gid options has any effect on the metadata in S3: both the uid and gid come up as '0' in S3. I created a user called svn with a uid of 1000 to see if that would fix the problem. No luck.
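
    A note worth verifying (this is an assumption about s3fs's behavior, not a confirmed fix): the uid/gid mount options generally control how ownership is presented on the local mount point, not what s3fs writes into object metadata on S3. Checking what the mount itself reports separates the two:

        # numeric ownership as seen through the mount point
        ls -ln /svn

    If the local listing shows 1000/1000 while the S3 metadata still says 0/0, the options are doing their (local) job and the metadata would have to be set when the objects are written.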

    Read the article

  • PTR record not valid for all domains

    - by charnley
    We have an issue sending emails to certain domains, namely Time Warner and Cox. Last week we decommissioned our Exchange 2003 server, and now our Exchange 2010 server is doing all of the transport for our domain. We run our own authoritative name servers, so we are in charge of the DNS and have modified our PTR record to reflect the new server. All mailflow is working except to these two domains. When I telnet on port 25 to the mail servers for Cox and Time Warner I receive errors. For Cox the error is:

        554... rejected - no rDNS

    And when I telnet on port 25 to the Time Warner mail server we get this:

        554 5.7.1 - Connection refused. IP name lookup failed for x.x.x.x

    I have run the outbound SMTP test on the Microsoft Remote Connectivity Analyzer and get 100% successful results. MXToolbox comes up with all successful SMTP tests as well, showing a correct reverse banner check and no blacklisting. DNSQueries.com shows a valid reverse DNS entry for us as well. Outbound emails to these two domains continue to sit in the queue. Any ideas or advice would be greatly appreciated. Thanks!
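
    One hedged check: reverse DNS is resolved against whoever holds the delegation for the in-addr.arpa zone of your IP block, which is often the ISP rather than your own name servers, so a PTR can look perfect locally while the rest of the world sees nothing. Querying through a public resolver (x.x.x.x stands in for the sending server's public IP, as in the question) shows what Cox and Time Warner actually see:

        dig -x x.x.x.x +short @8.8.8.8

    If this returns nothing, the reverse-zone delegation for the block needs to be corrected with the ISP, not on your own servers.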

    Read the article

  • System files not hidden Win XP

    - by ULTRA_POROV
    I recently copied my Win XP install from one hard drive to another (via an Ubuntu live CD). Everything went well, except that all the hidden system files (boot.ini, ntldr, io.sys) are no longer hidden. I tried this:

        attrib +s c:/IO.sys

    No effect. The files are still visible even though the setting to hide system files is checked. Any solutions? Thanks.
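
    A likely fix (assuming the copy stripped the attributes): Explorer's "Hide protected operating system files" option only hides files that carry both the system and the hidden attribute, so +s alone isn't enough, and the forward slash in the path may also confuse attrib. Setting both attributes with backslash paths:

        attrib +s +h c:\boot.ini
        attrib +s +h c:\ntldr
        attrib +s +h c:\io.sys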

    Read the article

  • 27 days after domain transfer, name servers not propagated

    - by Thom Seddon
    We recently bought the domain embarrassingnightclubphotos.com. Seven days after accepting the transfer, the domain finally transferred to our registrar, and we immediately changed the name servers from ns*.netregistry.net to amy.ns.cloudflare.com and cody.ns.cloudflare.com. Twenty days after changing the name servers, the majority of tests show that both the old and new nameservers are still being reported:

        http://intodns.com/embarrassingnightclubphotos.com
        http://www.whatsmydns.net/#NS/embarrassingnightclubphotos.com

    We are now ready to launch the new site, but this issue is plaguing us, as a high proportion of the traffic is still being handed the old nameservers and so hitting the old server. You can tell whether you have hit the old or new server because the old server has the value "A" for the meta tag "Location" and the new server has "U". (The old server just has an iframe, too!) I have never had this problem before - who is causing this, and how should we go about reaching a resolution? Thanks
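
    A hedged way to narrow this down is to ask the .com registry servers directly which delegation they are publishing (the AUTHORITY section of the reply shows it; note +short would print nothing for a referral like this):

        dig NS embarrassingnightclubphotos.com @a.gtld-servers.net

    If the netregistry servers still appear there, the change never reached the registry and the registrar has to fix it; if only the CloudFlare servers appear, the old servers are simply still answering authoritatively for stale caches and should be made to stop serving the zone.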

    Read the article

  • Mercurial not receiving push

    - by Jeffrey04
    I have a Mercurial web frontend (hgwebdir.cgi) installed on a server, with nginx installed in front of it as a reverse proxy, as my friend suggested. However, whenever a large changeset is pushed (via a script), it fails. I found an issue ticket @google-code that describes a similar problem, and there is a solution (#39) that says:

    So the server side answer is: don't send the 401 back early. Be as slow/dumb as 'hg serve' and make the hg client send the bundle twice.

    How do I do that? My current nginx config:

        location /repo/testdomain.com {
            rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi;
        }
        location /repo/testdomain.com/ {
            rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi;
        }
        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering on;
            client_max_body_size 4096M;
            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }

    From the access log we keep seeing 408 entries:

        incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"
        incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"

    Is there anything else I can do on the server? Solving it on the server side is preferable :/

    Further findings: Bitbucket seems to have this solved on the server side (check the liquidhg Bitbucket project and the Diagnosis wiki page), though I can't find the config anywhere :/

    What happens next varies depending on your server. Some servers refuse the BODY, simply closing the pipe from the client and causing Mercurial to fail. Some, like Apache (at least the way I configure it, and that could be part of the problem) and nginx (the way BitBucket.org configures it), accept the BODY, though it may take a few retries. Bottom line: if Mercurial doesn't fail the push, it sends the changeset data at least once to a server that has already told it it lacks credentials (more on this at Blame). Assuming Mercurial is still running, it resends the "unbundle" request and data, this time with authentication. Finally, Apache accepts the data successfully. Nginx, OTOH, at least under BitBucket's configuration, seems to reassemble the previous body (the one that lacked authentication) and somehow keep Mercurial from re-sending the whole body.
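
    A hedged server-side tweak worth trying: nginx answers 408 when the request body does not arrive within client_body_timeout (60 seconds by default), and a large bundle sent twice can easily overrun that. Raising it inside the existing location block (300s is an arbitrary example value, not a recommendation):

        location /repo/testdomain.com/hgwebdir.cgi {
            client_body_timeout 300;
            # ... existing proxy_pass and timeout settings unchanged ...
        }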

    Read the article

  • PHP fopen fails - does not have permission to open file in write mode

    - by George
    I have an Apache 2.17 server running on Fedora 13. I want to be able to create a file in a directory, but I cannot: whenever I try to open a file with PHP for writing, fopen(,'w'), it tells me that I don't have permission to do that. So I checked the httpd.conf file in /etc/httpd/conf/; it says user apache, group apache. So I changed ownership of my whole /www directory to apache:apache (chown -R apache:apache .*). I also ran chmod -R 777 *. Apart from knowing how terribly dangerous this is, it still gives me the same error, even though I now allow public write!
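
    On Fedora, the usual culprit when mode 777 still yields permission errors is SELinux, which confines Apache regardless of filesystem permissions. A hedged diagnostic and fix (the directory and type name should be checked against your actual docroot and policy):

        getenforce                                   # "Enforcing" means SELinux is active
        ls -Z /www                                   # show the SELinux context on the directory
        chcon -R -t httpd_sys_rw_content_t /www      # label the tree writable for httpd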

    Read the article

  • Why is scp not overwriting my destination file?

    - by Noli
    I'm trying to back up a file via the command:

        scp /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz

    When I run it, the scp progress bar shows up and it looks like it's transferring the file. However, when I log into the destination server to check, the timestamp and filesize haven't changed from the older version, so it looks like scp didn't overwrite the old file at all. It only seems to work when I manually delete the file from the destination server first. I'm running Ubuntu, and this is happening on two servers: one Cygwin SSH, and one Fedora Core 3. Does anyone have any idea why this is happening? I thought scp would simply overwrite existing files. Thanks
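
    A hedged sanity check is to compare checksums instead of trusting timestamps, since a destination that is actually a symlink, or an ssh setup that lands you in a different directory than the one scp writes to, can make the file you inspect not be the file that was written:

        md5sum /tmp/backup.tar.gz
        ssh hostname 'md5sum /home/user/backup.tar.gz; ls -l /home/user/backup.tar.gz'

    Matching checksums would mean the copy succeeded and only the path being inspected is wrong.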

    Read the article

  • Strange FTP issues - some files are not downloaded

    - by FractalizeR
    I have a machine which cannot fetch certain files from remote servers by FTP. The machine is powered by CentOS. I tested FTP on three files:

        12.09.2012 21:21 166 007 ll091212.002
        13.09.2012 11:32 23 040 ll091212.003
        13.09.2012 11:50 61 313 ll091212.004

    Of these, I can only ever successfully download one - ll091212.004. The other two download to about 90% (I can see them on disk) and then the FTP transfer hangs without any error message. I have moved the files around and copied them about the remote server - no luck. Another machine from the same subnet can download all three easily. I just don't know what the reason for this is.
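
    Transfers that reliably stall near the end on one machine while a neighbour succeeds often point at an MTU/fragmentation or stateful-firewall problem rather than at FTP itself. Two hedged tests (credentials, host, and path are placeholders, not values from the question):

        # rule out the client and transfer mode: curl uses passive FTP by default
        curl -O ftp://user:pass@remote.host/path/ll091212.002
        # probe for a path-MTU problem: 1472 data bytes + 28 header bytes = a full 1500-byte frame
        ping -M do -s 1472 remote.host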

    Read the article

  • SSH connection problem - allowed from LAN but not WAN

    - by Kerem Ulutas
    I tried to set up my Arch Linux installation as an SSH host, but here is the thing: I can ssh localhost - it fails to log in via public key and asks for username and password, but I am still able to log in. When I try ssh my_wan_ip it gives:

        ssh_exchange_identification: Connection closed by remote host

    I've read all the topics about this error and none helped. By the way, just confirmed: from another machine (outside my network, with a different WAN IP) it gives:

        ssh: connect to host my_dyndns_hostname port 22: Connection refused

    I have sshd: ALL in hosts.allow and ALL:ALL in hosts.deny. I am able to connect to my own PC via SSH and to ping it, but my sshd setup seems to be the problem; it gives that annoying error when I try to ssh from the WAN. /etc/ssh/ssh_config /etc/ssh/sshd_config And finally, here is the debug output for both sshd and ssh (I ran the ssh command and took the sshd debug output after that): sshd debug ssh debug I can edit my question according to your needs; just ask for any more information. BTW I have no iptables running. I have one cable DSL modem connected to an Asus WL-330gE wireless access point; both have their firewalls disabled. I configured NAT so port 22 is directed to the PC I'm having this trouble with. Any help appreciated, thanks.
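
    A hedged way to separate the layers: ssh_exchange_identification usually means the daemon accepted the TCP connection and then dropped it (TCP wrappers via hosts.deny behave exactly like this), while "Connection refused" means nothing answered at that address/port at all (a NAT forwarding miss). Running a one-off verbose sshd on an alternate port, and forwarding that port too, shows which case applies:

        # foreground debug instance; connect to port 2222 from outside and watch the output
        sudo /usr/sbin/sshd -d -p 2222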

    Read the article

  • Not able to register SIP user on red5server using red5phone

    - by sunil221
    I start Red5, and then I start red5phone and try to register a SIP user. The details I provide are: username = 999999, password = ****, ip = asteriskserverip. And I got:

        Registering contact -- sip:999999@<local IP>:5072

    The right contact would be:

        sip:99999@asteriskserverip

    This is the log:

        SipUserAgent - listen -> Init...
        Red5SIP register
        [SIPUser] register
        RegisterAgent: Registering contact <sip:999999@<local IP>:5072> (it expires in 3600 secs)
        RegisterAgent: Registration failure: No response from server.
        [SIPUser] SIP Registration failure Timeout
        RegisterAgent: Failed Registration stop try.
        Red5SIP Client leaving app 1
        Red5SIP Client closing client 35C1B495-E084-1651-0C40-559437CAC7E1
        Release ports: sip port 5072 audio port 3002
        Release port number:5072
        Release port number:3002
        [SIPUser] close1
        [SIPUser] hangup
        [SIPUser] closeStreams
        RTMPUser stopStream
        [SIPUser] unregister
        RegisterAgent: Unregistering contact <sip:999999@<local IP>:5072>
        SipUserAgent - hangup -> Init...
        SipUserAgent - closeMediaApplication -> Init...
        [SIPUser] provider.halt
        RegisterAgent: Registration failure: No response from server.
        [SIPUser] SIP Registration failure Timeout

    Please let me know if I am doing anything wrong. Regards, Sunil
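
    One hedged diagnostic: "No response from server" usually means the REGISTER never reached Asterisk (wrong address or blocked) rather than being rejected, so watching the Asterisk console while red5phone retries settles it (the exact command varies by Asterisk version; older releases use 'sip debug'):

        asterisk -rvvv
        sip set debug on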

    Read the article

  • php rsync with exec() not working

    - by mojeime
    Why does this:

        rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder

    work from a cronjob, while this:

        exec("rsync -avz -e ssh /home/userneme/folder [email protected]:/var/www/folder");

    doesn't? I know exec is working because I have a few places in my app that do conversion from PDF to JPG with ImageMagick (exec).

    SOLVED: exec was working OK; it was a permission issue on the remote server. The "local" server is a shared reseller account and the remote server is my first VPS, an Ubuntu 10.10 LAMP box. If only I had a system administrator, since I'm just a software developer forced to do this and I stink at it :) Thank you all!
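
    For anyone hitting the same wall, a hedged PHP sketch that surfaces rsync's stderr and exit status (which would have pointed straight at the remote permission error; user@host stands in for the question's redacted remote):

        <?php
        // capture stdout and stderr together, plus the exit code
        exec("rsync -avz -e ssh /home/userneme/folder user@host:/var/www/folder 2>&1", $output, $status);
        if ($status !== 0) {
            // e.g. rsync status 23 = partial transfer, 255 = the ssh layer failed
            error_log("rsync failed with status $status:\n" . implode("\n", $output));
        }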

    Read the article

  • EC2 Apache Server not reflecting changes to PHP files

    - by Josef van Niekerk
    We're running a Laravel application on an Amazon EC2 server with Apache installed. I've noticed on multiple occasions that the server doesn't respond to changes in PHP files, even after restarting Apache. For example, if I edit a file that I'm accessing through a URL and break the syntax, I don't even get a PHP exception thrown. This is really strange, and it has been sticking its head out more frequently these days. Is it possible Apache is caching the PHP files somewhere? Opcode caching perhaps?
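
    A hedged first check is to list the loaded PHP modules and look for an opcode cache, since any of them can serve stale compiled scripts when stat checks are turned off (e.g. apc.stat=0):

        php -m | grep -i -E 'apc|opcache|eaccelerator|xcache'

    Laravel's own compiled artifacts (compiled views, and config/route caches in later versions) are another assumption worth ruling out, since they too can mask edits to source files.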

    Read the article

  • FTP Upload works from local command line / remote GUI client but not from PHP script

    - by MrOodles
    I originally posted this question at StackOverflow, but I'm beginning to think it's more of a server question. I have installed ProFTPd on an EC2 instance running Ubuntu 10.10. I have set up my proftpd.conf file and server permissions so that I can connect and upload/move files using FTP, both remotely using Filezilla and on the server itself when connecting to 127.0.0.1. The problem I'm running into is when I try to upload/install a file using Joomla's interface. I give Joomla the same login information that I give to Filezilla, and the connection is made in the same fashion. The ftp.log file actually shows that Joomla is able to log in to the server:

        localhost UNKNOWN nobody [17/Jan/2011:14:09:17 +0000] "USER ftpuser" 331 -
        localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "PASS (hidden)" 230 -
        localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "PASV" 227 -
        localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "TYPE I" 200 -
        localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "STOR /directory/store/location/file.zip" 550 -

    But it fails when attempting the STOR command. I have traced the problem in the Joomla code to the PHP FTP module. The code (with my trace statements added):

        if (@ftp_put($this->_conn, $remote, $local, $mode) === false) {
            echo "\n FTP PUT failed.";
            echo "\n Remote: $remote ; Local: $local ; Mode: $mode - Either ASCII: ".FTP_ASCII." or Binary: ".FTP_BINARY;
            echo "\n The user: ".exec("whoami");
            JError::raiseWarning('35', 'JFTP::store: Bad response' );
            return false;
        }

    The trace outputs:

        FTP PUT failed.
        Remote: /directory/store/location/file.zip ; Local: /tmp/phpwuccp4 ; Mode: 2 - Either ASCII: 1 or Binary: 2
        The user: www-data

    And in case you were curious, here is an example of the FTP log when using Filezilla:

        my_client_ip UNKNOWN nobody [17/Jan/2011:16:45:55 +0000] "USER ftpuser" 331 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PASS (hidden)" 230 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "OPTS UTF8 ON" - -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PWD" 257 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "TYPE I" 200 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PASV" 227 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "MLSD" 226 3405
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "CWD location" 250 3405
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "PWD" 257 3405
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "PASV" 227 3405
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:07 +0000] "MLSD" 226 3757
        my_client_ip UNKNOWN nobody [17/Jan/2011:16:46:37 +0000] "USER ftpuser" 331 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PASS (hidden)" 230 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "OPTS UTF8 ON" - -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "CWD /location" 250 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PWD" 257 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "TYPE I" 200 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PASV" 227 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "STOR file.zip" 226 125317
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "PASV" 227 -
        my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "MLSD" 226 497
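
    A hedged reading of those logs: the 550 is returned to the FTP login (ftpuser), and the working Filezilla session differs in that it CWDs into the directory and sends a relative "STOR file.zip", while Joomla sends an absolute path. If proftpd.conf uses DefaultRoot (a chroot), an absolute path is interpreted relative to the chroot, so Joomla's FTP-root setting may need to drop the part of the path above the chroot; plain directory permissions are the other suspect. Quick checks on the server (the path is the question's placeholder):

        ls -ld /directory/store/location                              # can ftpuser write here?
        sudo -u ftpuser touch /directory/store/location/write-test   # assumes ftpuser has a usable shell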

    Read the article

  • DBD::mysql gives mysql_init not found

    - by highBandWidth
    I have to install a non-admin copy of MySQL and the Perl module DBD::mysql in my home directory. I installed MySQL in ~/software/db/mysql and this works, since I can start and stop the server and get to the mysql prompt. Then I downloaded the Perl module and installed it using:

        perl Makefile.PL PREFIX=~/myperl/ LIB=~/myperl/lib/lib64/perl5/ --mysql_config=/my_home/software/db/mysql/bin/mysql_config --libs=/myhome/software/db/mysql/lib/libmysqlclient.a
        make
        make install

    I did this to use the statically linked MySQL client library. perl -MDBD::mysql -e 1 gives no errors. However, when I actually try to use the module, I get:

        /usr/bin/perl: symbol lookup error: /myhome/myperl/lib/lib64/perl5/x86_64-linux-thread-multi/auto/DBD/mysql/mysql.so: undefined symbol: mysql_init
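
    One hedged interpretation: an undefined mysql_init at runtime suggests mysql.so was not actually linked against the static client library after all (the --libs override may not have survived the build), so the runtime linker goes looking for a shared libmysqlclient it cannot find in a non-standard home-directory prefix. Worth trying before rebuilding:

        # check what mysql.so really links against
        ldd ~/myperl/lib/lib64/perl5/x86_64-linux-thread-multi/auto/DBD/mysql/mysql.so
        # point the runtime linker at the private MySQL lib directory
        export LD_LIBRARY_PATH=$HOME/software/db/mysql/lib:$LD_LIBRARY_PATH
        perl -e 'use DBD::mysql; print $DBD::mysql::VERSION, "\n"'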

    Read the article

  • Adobe Dreamweaver CS5 Find and Replace ALT-F not working

    - by wag2639
    I'm using a demo of Adobe Dreamweaver CS5 and I've noticed that it no longer opens Find and Replace in a separate window. The problem with this, at least on Windows 7, is that I can't use the normal hotkey shortcuts like ALT-F, because the keystroke is intercepted by the main window and brings up the File menu dropdown. Is there a fix for this?

    Read the article

  • User directory in PATH in zsh: ~ Does Not Work

    - by Yar
    I'm doing this in my .zshrc:

        PATH="~/scripts:$PATH"

    and if I do echo $PATH it appears as the first thing in the path. Yet this directory isn't included in the executable path (nor in tab completion). What am I doing wrong? ls ~/scripts shows the directory as expected.
    Edit: This works, though... I guess ~ doesn't work in the path?

        #PATH="~/scripts:$PATH"
        PATH="/Users/yar/scripts:$PATH"
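
    That is expected shell behavior: tilde expansion does not happen inside double quotes, so the literal two-character string ~/scripts lands in PATH, and command lookup never expands it. Two equivalent fixes, either of which avoids hard-coding the home directory:

        PATH="$HOME/scripts:$PATH"    # $HOME expands inside double quotes
        PATH=~/scripts:"$PATH"        # or leave the ~ outside the quotes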

    Read the article
