Search Results

Search found 25797 results on 1032 pages for 'source formatting'.

  • OpenVpn Iptables Error

    - by Mook
    I'm a real Linux newbie here. Please help me configure my OpenVPN server through iptables. My main goal is to open the ports for regular browsing (80, 443), email (110, 25), etc., just as an ISP does, but to block p2p traffic, so I only need to open a few ports. Here is my iptables config:

        # Flush all current rules from iptables
        #
        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        #
        # Allow SSH connections on tcp port 22 (or whatever port you want to use)
        #
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        #
        # Set default policies for INPUT, FORWARD and OUTPUT chains
        #
        iptables -P INPUT DROP   # using DROP for INPUT is not always recommended. Change to ACCEPT if you prefer.
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        #
        # Set access for localhost
        #
        iptables -A INPUT -i lo -j ACCEPT
        #
        # Accept packets belonging to established and related connections
        #
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        #
        # Accept connections on 1194 for VPN access from clients.
        # Take note that the rule says "UDP", and ensure that your OpenVPN server.conf says UDP too.
        #
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        #
        # Apply forwarding for OpenVPN tunneling
        #
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT   # 10.8.0.0? Check your OpenVPN server.conf to be sure
        iptables -A FORWARD -j REJECT
        iptables -t nat -A POSTROUTING -o venet0 -j SNAT --to-source 100.200.255.256   # Use your OpenVPN server's real external IP here
        #
        # Enable forwarding
        #
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -A INPUT -p tcp --dport 25 -j ACCEPT
        iptables -A INPUT -p tcp --dport 26 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 110 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        iptables -L -v

    But when I connect to my VPN, I can't browse, and pings to yahoo etc. time out.
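
    A minimal sketch of the port-restricted forwarding the poster seems to be after, assuming the tunnel interface is tun0 and the client subnet really is 10.8.0.0/24 (both assumptions here). Note that browsing also needs DNS (UDP 53) forwarded, which the rules above never mention and which is a common reason VPN clients "can't browse":

        # Allow return traffic for connections already established
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        # Forward only the chosen services from VPN clients (example ports)
        for port in 25 80 110 443; do
            iptables -A FORWARD -i tun0 -s 10.8.0.0/24 -p tcp --dport "$port" -j ACCEPT
        done
        # DNS, or there is no name resolution for the clients
        iptables -A FORWARD -i tun0 -s 10.8.0.0/24 -p udp --dport 53 -j ACCEPT
        # Everything else from the tunnel is rejected
        iptables -A FORWARD -i tun0 -j REJECT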

  • User cannot access a system DSN on Windows Server 2008

    - by Ra Osolage
    We run our SQL Server services under a low-privileged domain account. That account is NOT a local admin on the OS. The only access I give the account is assigned during the SQL install: full control over its mount points; everything else is granted by the SQL Server 2005/2008 installer. I need to create a linked server in SQL Server 2008 to an ODBC data source. So I remoted into the computer using my domain account, which is part of a group that DOES have local admin privileges on the OS. I created a system DSN and configured it to connect to another SQL Server. The DSN works perfectly when I test it. However, when I try to create the linked server, I get an error. It appears to me that the DSN is invisible to the domain account that SQL Server is running as. This problem seems to happen to me only on Windows 2008 servers. Does anybody know whether there's anything you need to do after creating a DSN to make it visible to other users?

  • Centos CMake Does Not Install Using gcc 4.7.2

    - by Devin Dixon
    A similar problem has been reported here with no solution: https://www.centos.org/modules/newbb/print.php?form=1&topic_id=42696&forum=56&order=ASC&start=0

    I've added an upgraded gcc to CentOS:

        cd /etc/yum.repos.d
        wget http://people.centos.org/tru/devtools-1.1/devtools-1.1.repo
        yum --enablerepo=testing-1.1-devtools-6 install devtoolset-1.1-gcc devtoolset-1.1-gcc-c++
        scl enable devtoolset-1.1 bash

    The result for my gcc is:

        [root@hhvm-build-centos cmake-2.8.11.1]# gcc -v
        Using built-in specs.
        COLLECT_GCC=gcc
        COLLECT_LTO_WRAPPER=/opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/lto-wrapper
        Target: x86_64-redhat-linux
        Configured with: ../configure --prefix=/opt/centos/devtoolset-1.1/root/usr --mandir=/opt/centos/devtoolset-1.1/root/usr/share/man --infodir=/opt/centos/devtoolset-1.1/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --disable-build-with-cxx --disable-build-poststage1-with-cxx --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-ppl --with-cloog --with-mpc=/home/centos/rpm/BUILD/gcc-4.7.2-20121015/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
        Thread model: posix
        gcc version 4.7.2 20121015 (Red Hat 4.7.2-5) (GCC)

    I then tried to install CMake from http://www.cmake.org/cmake/resources/software.html#latest but keep running into this error:

        Linking CXX executable ../bin/ccmake
        /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: CMakeFiles/ccmake.dir/CursesDialog/cmCursesMainForm.cxx.o: undefined reference to symbol 'keypad'
        /opt/centos/devtoolset-1.1/root/usr/libexec/gcc/x86_64-redhat-linux/4.7.2/ld: note: 'keypad' is defined in DSO /lib64/libtinfo.so.5 so try adding it to the linker command line
        /lib64/libtinfo.so.5: could not read symbols: Invalid operation
        collect2: error: ld returned 1 exit status
        gmake[2]: *** [bin/ccmake] Error 1
        gmake[1]: *** [Source/CMakeFiles/ccmake.dir/all] Error 2
        gmake: *** [all] Error 2

    The problem seems to come from the newly installed gcc, because the build works with the default compiler. Is there a solution to this problem?
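
    The linker note is explicit: 'keypad' lives in libtinfo, which the devtoolset ld no longer pulls in transitively (older toolchains resolved symbols from indirectly referenced DSOs; newer ones do not). A sketch of the usual workaround, assuming CMake's bootstrap script passes LDFLAGS through, as releases of that era do:

        cd cmake-2.8.11.1
        # Link ccmake against libtinfo explicitly so 'keypad' resolves
        LDFLAGS="-ltinfo" ./bootstrap
        gmake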

  • flask, lighttpd with fastcgi can't get it to work

    - by kurojishi
    I'm trying to deploy a simple Flask script to a lighttpd server with FastCGI. This is the lighttpd configuration file, built following the Flask documentation (http://flask.pocoo.org/docs/deploying/fastcgi/#configuring-lighttpd):

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
        )

        server.document-root = "/var/www"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/var/log/lighttpd/error.log"
        server.pid-file = "/var/run/lighttpd.pid"
        server.username = "www-data"
        server.groupname = "www-data"

        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

        var.home_dir = "/var/lib/lighttpd"
        var.socket_dir = home_dir + "sockets/"

        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"

        dir-listing.encoding = "utf-8"
        server.dir-listing = "enable"

        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )

        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

        fastcgi.server = ("weibo/callback.fcgi" =>
            ((
                "socket" => "/tmp/weibocrawler-fcgi.sock",
                "bin-path" => "/var/www/weibo/callback.fcgi",
                "check-local" => "disable",
                "max-procs" => 1
            ))
        )

        url.rewrite-once = (
            "^(/weibo($|/.*))$" => "$1",
            "^(/.*)$" => "weibo/callback.fcgi$1"

    And this is the script I'm trying to run:

        #!/home/nrl/kuro/weiboenv/bin/python
        from flup.server.fcgi import WSGIServer
        from callback import app

        if __name__ == '__main__':
            WSGIServer(application, bindAddress='/tmp/weibocrawler-fcgi.sock').run()

    Testing the configuration file, I get this error:

        2013-07-02 17:15:42: (configfile.c.912) source: lighttpd.conf.new line: 52 pos: 1 parser failed somehow near here: weibo/callback.fcgi$1

    When I remove the url rewrite, I get these errors in the log even though the daemon starts:

        2013-07-02 16:25:53: (log.c.166) server started
        2013-07-02 16:25:53: (mod_fastcgi.c.1104) the fastcgi-backend fcgi.py failed to start:
        2013-07-02 16:25:53: (mod_fastcgi.c.1108) child exited with status 2 fcgi.py
        2013-07-02 16:25:53: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2013-07-02 16:25:53: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
        2013-07-02 16:25:53: (server.c.938) Configuration of plugins failed. Going down.
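
    Two things stand out here, for what it's worth. First, the url.rewrite-once block as pasted is never closed, which matches the parser failing at its last line; it presumably just needs its closing parenthesis:

        url.rewrite-once = (
            "^(/weibo($|/.*))$" => "$1",
            "^(/.*)$" => "weibo/callback.fcgi$1"
        )

    Second, the FastCGI script imports app but hands WSGIServer the undefined name application, so it would die with a NameError on startup, which would fit "child exited with status 2". It should pass the imported Flask object instead, i.e. WSGIServer(app, bindAddress='/tmp/weibocrawler-fcgi.sock').run().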

  • CVSROOT problem because of username string

    - by jatanp
    Hi, I have always been an SVN user, but currently I have to use CVS as the source repository. I am quite new to CVS and have gotten confused many times (the reason being that I always tried to treat CVS like SVN!). Now I am really stuck on one problem: I am not able to do any cvs operations through Cygwin. I checked out the code using WinCVS, and while doing that it created the CVSROOT as follows:

        :pserver;username=<user_name>;password=<pwd>:<serverip>:/cvs/repository

    However, whenever I try to use the cvs command in Cygwin (after setting the CVSROOT variable using export), it fails with the following error:

        cvs update: Unknown option (`username') in CVSROOT.
        cvs update: in directory .:
        cvs update: ignoring CVS/Root because it does not contain a valid root.
        cvs update: Unknown option (`username') in CVSROOT.
        cvs [update aborted]: Bad CVSROOT: `:pserver;username=<user_name>;password=<pwd>:<serverip>:/cvs/repository'.

    However, the command works fine if invoked through the DOS command prompt. I got to know that on the DOS prompt, the cvs command is provided by CVSNT, whereas in Cygwin it's some different package. Please let me know where I have made a mistake and how it can be corrected! I need cvs to work inside Cygwin for scripting purposes.
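
    The semicolon "option=value" CVSROOT form is a CVSNT extension; stock cvs (the Cygwin package) only understands the classic colon-separated syntax. A sketch of the usual translation, reusing the poster's placeholders:

        # Classic syntax for plain cvs; CVSNT's ';username=...;password=...' parts go away
        export CVSROOT=':pserver:<user_name>@<serverip>:/cvs/repository'
        cvs login    # prompts for the password and caches it in ~/.cvspass
        cvs update

    The CVS/Root files in the WinCVS checkout also contain the CVSNT form (hence "ignoring CVS/Root because it does not contain a valid root"), so an existing working copy would need those rewritten, or a fresh checkout made from Cygwin.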

  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get error code 691 on the client, which says:

        Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    I validated that the username and password are correct. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports that security method. But I cannot log in. On the server I get this log:

        Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    Any idea what I can do? Thanks in advance!

        Log Name:      Security
        Source:        Microsoft-Windows-Security-Auditing
        Date:          12/29/2010 7:12:20 AM
        Event ID:      6273
        Task Category: Network Policy Server
        Level:         Information
        Keywords:      Audit Failure
        User:          N/A
        Computer:      VPN.domain.com
        Description:
        Network Policy Server denied access to a user.
        Contact the Network Policy Server administrator for more information.
        User:
            Security ID:                  domain\Administrator
            Account Name:                 domain\Administrator
            Account Domain:               domani
            Fully Qualified Account Name: domain.com/Users/Administrator
        Client Machine:
            Security ID:                  NULL SID
            Account Name:                 -
            Fully Qualified Account Name: -
            OS-Version:                   -
            Called Station Identifier:    192.168.147.171
            Calling Station Identifier:   192.168.147.191
        NAS:
            NAS IPv4 Address: -
            NAS IPv6 Address: -
            NAS Identifier:   VPN
            NAS Port-Type:    Virtual
            NAS Port:         0
        RADIUS Client:
            Client Friendly Name: VPN
            Client IP Address:    -
        Authentication Details:
            Connection Request Policy Name: Microsoft Routing and Remote Access Service Policy
            Network Policy Name:            All
            Authentication Provider:        Windows
            Authentication Server:          VPN.domain.home
            Authentication Type:            EAP
            EAP Type:                       Microsoft: Secured password (EAP-MSCHAP v2)
            Account Session Identifier:     313933
        Logging Results: Accounting information was written to the local log file.
        Reason Code: 16
        Reason:      Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.

  • ntpdate -d Server dropped Strata too high

    - by AndyM
    I cannot sync with an NTP source coming from an internal router/firewall. Can anyone help?

        ntpdate -d 192.168.92.82
        6 Jun 11:57:30 ntpdate[5011]: ntpdate [email protected] Tue Feb 24 06:32:26 EST 2004 (1)
        transmit(192.168.92.82)
        receive(192.168.92.82)
        transmit(192.168.92.82)
        receive(192.168.92.82)
        transmit(192.168.92.82)
        receive(192.168.92.82)
        transmit(192.168.92.82)
        receive(192.168.92.82)
        transmit(192.168.92.82)
        192.168.92.82: Server dropped: strata too high
        server 192.168.92.82, port 123
        stratum 16, precision -19, leap 11, trust 000
        refid [73.78.73.84], delay 0.02591, dispersion 0.00002
        transmitted 4, in filter 4
        reference time:      00000000.00000000  Thu, Feb  7 2036  6:28:16.000
        originate timestamp: d1972e03.0ae02645  Mon, Jun  6 2011 11:44:19.042
        transmit timestamp:  d197311b.0ffac1d2  Mon, Jun  6 2011 11:57:31.062
        filter delay:  0.02609  0.02591  0.02594  0.02596
                       0.00000  0.00000  0.00000  0.00000
        filter offset: -792.020 -792.020 -792.020 -792.020
                       0.000000 0.000000 0.000000 0.000000
        delay 0.02591, dispersion 0.00002
        offset -792.020152

        6 Jun 11:57:31 ntpdate[5011]: no server suitable for synchronization found

    Edit: The server I'm being asked to sync to is a firewall, and I've now been told that it is not syncing with anything. So I suppose I need to know whether I can force my server to sync with a server that is stratum 16, i.e. not itself synced. Is that possible?
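
    A short answer to the stratum question, plus a sketch: stratum 16 is NTP's "unsynchronized" marker, and both ntpdate and ntpd will always refuse such a server, so there is nothing to force on the client side. The fix belongs on the firewall: either let it sync upstream, or, if it runs an ntpd-style daemon, have it serve its own local clock at a usable stratum. Hypothetical ntp.conf lines for the latter, using the standard local clock driver:

        # Serve the box's own clock, advertised at stratum 10 instead of 16
        server 127.127.1.0
        fudge  127.127.1.0 stratum 10

    Also note the filter offset of about -792 seconds: once the server is usable, the client would need ntpd -g (or ntpdate -b) to step a gap that large.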

  • ASP.NET AJAX, WebSeal Junctions, and Sessions

    - by powella
    I've run up against a problem with ASP.NET AJAX (hooked up to web services directly) when accessing our site through a WebSEAL junction. Listing 11 on this page, http://www.ibm.com/developerworks/tivoli/library/t-ajaxtam/index.html, explains that requests for pages which do not result in a content type of text/html are not sent with cookie data. Hence, no session. ASP.NET AJAX requests are returned with a content type of "application/json; charset=utf-8". As such, the WebSEAL junction is not appending the session cookie to the request. This results in our web service seeing the user as invalid, due to no session information. The junction has been set up properly with the -J parameter (that's an uppercase J, which appends the required script for WebSEAL to the bottom of the page; this prevents forcing IE into quirks mode), and we've confirmed that the necessary script exists in the output source. I'm up for any suggestions at this point, as I'm out of ideas. FWIW, the site runs perfectly when not accessed through the WebSEAL junction.

  • Upgrading Ubuntu to 9.04 breaks ATI video card driver, VESA and ati/radeon drivers

    - by Neil
    I upgraded my Ubuntu 8.10 to 9.04, and it not only broke the ATI proprietary fglrx driver, but also the ability to use the VESA or open-source ati/radeon drivers. I have an ATI RV610, which is an ATI Radeon HD 2400 XT, and Linux kernels 2.6.27-14-generic and 2.6.28-13-generic. With fglrx, vesa, ati and radeon, the X server hangs the machine as soon as it starts via invoking X or startx; you can tell because Caps Lock stops responding. There's nothing useful in /var/log/Xorg.0.log, no errors at all. This happens with either kernel. When I download a new proprietary driver from ATI, it installs successfully on kernel 2.6.27 and doesn't hang when X starts up, but it just shows a blank screen and does nothing. I also can't Ctrl+Alt+Backspace out of X at this point. In all the years I've used ATI's Linux drivers, this has happened almost every time I've upgraded my kernel, but it's been fixable with much effort. This time I'm really stuck. Does anyone know how to fix these problems?
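
    A starting point that often helps when every driver hangs the same way is to clear out the proprietary leftovers and let X autodetect again. A sketch, assuming the 9.04-era package name (xorg-driver-fglrx was the usual one then):

        # Purge the proprietary driver and its remnants
        sudo apt-get remove --purge xorg-driver-fglrx
        # Regenerate a stock X configuration so the open drivers are autodetected
        sudo dpkg-reconfigure -phigh xserver-xorg
        # After the next failed start, check the log and kernel messages
        less /var/log/Xorg.0.log
        dmesg | tail -50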

  • what does the asterisk mean after a filename if you do ls -l

    - by James.Elsey
    I've done an ls -l inside a directory, and my files are displayed like this:

        james@nevada:~/development/tools/android-sdk-linux_86/tools$ ll
        total 9512
        drwxr-xr-x 3 james james    4096 2010-05-07 19:48 ./
        drwxr-xr-x 6 james james    4096 2010-08-21 20:43 ../
        -rwxr-xr-x 1 james james  341773 2010-05-07 19:47 adb*
        -rwxr-xr-x 1 james james    3636 2010-05-07 19:47 android*
        -rwxr-xr-x 1 james james    2382 2010-05-07 19:47 apkbuilder*
        -rwxr-xr-x 1 james james    3265 2010-05-07 19:47 ddms*
        -rwxr-xr-x 1 james james   89032 2010-05-07 19:47 dmtracedump*
        -rwxr-xr-x 1 james james    1940 2010-05-07 19:47 draw9patch*
        -rwxr-xr-x 1 james james 6886136 2010-05-07 19:47 emulator*
        -rwxr-xr-x 1 james james  478199 2010-05-07 19:47 etc1tool*
        -rwxr-xr-x 1 james james    1987 2010-05-07 19:47 hierarchyviewer*
        -rwxr-xr-x 1 james james   23044 2010-05-07 19:47 hprof-conv*
        -rwxr-xr-x 1 james james    1939 2010-05-07 19:47 layoutopt*
        drwxr-xr-x 4 james james    4096 2010-05-07 19:48 lib/
        -rwxr-xr-x 1 james james   16550 2010-05-07 19:47 mksdcard*
        -rw-r--r-- 1 james james  205851 2010-05-07 19:48 NOTICE.txt
        -rw-r--r-- 1 james james      33 2010-05-07 19:47 source.properties
        -rwxr-xr-x 1 james james 1447936 2010-05-07 19:47 sqlite3*
        -rwxr-xr-x 1 james james    3044 2010-05-07 19:47 traceview*
        -rwxr-xr-x 1 james james  187965 2010-05-07 19:47 zipalign*

    What does that asterisk mean? I'm also unable to run a particular file, as follows:

        james@nevada:~/development/tools/android-sdk-linux_86/tools$ ./emulator
        bash: ./emulator: No such file or directory

    EDIT: I'm trying to get Eclipse to use the emulator, but it keeps complaining that the file does not exist, yet it is here?
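
    Two pointers, since both symptoms have standard explanations. The trailing * (and the / on directories) comes from ls's -F/--classify flag, which the ll alias typically includes; it simply marks executables. And "No such file or directory" for a file that plainly exists usually means the kernel cannot find the binary's loader: the SDK emulator of that era was a 32-bit binary, so a 64-bit Ubuntu needed the 32-bit runtime. A quick check, with the last line assuming an apt-based system of that vintage:

        # See how ll is defined; -F (or --classify) is what prints the markers
        type ll
        # Confirm the binary's architecture; "ELF 32-bit" on an x86_64 host is the giveaway
        file emulator
        # The usual fix on Ubuntu of that era
        sudo apt-get install ia32-libs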

  • alias gcc='gcc -fpermissive' or modifying ./configure script

    - by robo
    I am compiling a quite big project from source. The compilation always ends with:

        error: invalid conversion from 'const char*' to 'char*' [-fpermissive]

    I already compiled this project one year ago, so I know a solution to this. Actually, I found several solutions:

    1. Adding a typecast to the appropriate line of C++ code (this led to an endless number of changes in each file, so I looked for the next solution).
    2. Modifying the makefile to compile the offending file with the -fpermissive option (I had to modify a lot of lines in each makefile, so I looked for an even better solution).
    3. "g++" or "gcc" was stored in a variable, so I added -fpermissive to these variables. This is the best solution I have: it is sufficient to add the option to each makefile once.

    Unfortunately this software has a big number of subdirectories, so I need to modify more than 100 makefiles. That took me a whole day one year ago. Is there a way to do this faster? What about this:

        alias gcc='gcc -fpermissive'

    I am not familiar with aliases, but it should be easy to try. Is the syntax correct? And is this one correct?

        alias g++='g++ -fpermissive'

    Do I need to export the alias somehow? Will the make program respect the alias? Should I maybe change the ./configure script, or configure.in, or some other file?
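
    Aliases won't help here: make runs its recipes in a non-interactive /bin/sh, which does not expand aliases, so an aliased gcc or g++ is never seen. The conventional route is to override the compiler or flags variables once, at configure or make time. A sketch (note -fpermissive is a C++-only option, so CXXFLAGS is the one that matters for this error):

        # At configure time, so every generated makefile inherits it
        ./configure CXXFLAGS="-g -O2 -fpermissive"
        # Or at build time, overriding whatever the makefiles set
        make CXXFLAGS="-g -O2 -fpermissive"
        # Or, mirroring the poster's variable-based solution in one place
        make CXX="g++ -fpermissive"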

  • FFMPEG Install on EC2 - Amazon Linux

    - by Oliver Holmberg
    Hello Serverfault friends, I am about two days into attempting to install FFmpeg with dependencies on an AWS EC2 instance running the Amazon Linux AMI. I've installed FFmpeg on Ubuntu and Fedora systems with no problems in the past, and have read reportedly successful instructions for installing on Red Hat/Fedora. I have followed a number of tutorials and forum articles, but have had no luck yet. As far as I can tell, the main problems are as follows:

    1. The Amazon Linux (most similar to Red Hat/CentOS) yum repositories don't have ffmpeg available. I have found instructions for updating the repositories to include the required packages, but adding these repositories causes yum to fail when updating packages. (Also, I've read some cautionary tales about adding Red Hat/CentOS repositories to Amazon Linux that lead me to believe it may be a bad idea: https://forums.aws.amazon.com/thread.jspa?messageID=229166)
    2. I have tried the more complicated method of downloading the source tarball, compiling, and installing, but this always fails due to missing dependencies and other errors.

    On to my question: has anyone successfully installed FFmpeg on Amazon Linux? Is there a fundamental incompatibility? If anyone could share specific instructions on installing ffmpeg on Amazon Linux I would be greatly appreciative. Any other insights/experiences would also be appreciated. Thanks in advance, Oliver
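
    For reference, a minimal source-build sketch of the kind that generally works on Amazon Linux; the package names and the git URL are the commonly documented ones, but treat the exact set as assumptions that may need adjusting, and note this builds a bare ffmpeg with no external codec libraries:

        # Build prerequisites from the stock repos
        sudo yum install -y gcc gcc-c++ make git yasm zlib-devel
        # Fetch and build ffmpeg itself
        git clone git://source.ffmpeg.org/ffmpeg.git
        cd ffmpeg
        ./configure --prefix=/usr/local
        make
        sudo make install

    Codec support (x264, lame, and so on) would each need the same treatment before adding the corresponding --enable-* flags to configure.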

  • How to get rid of auto-generated sequence number in network's device name in Windows?

    - by Piotr Dobrogost
    Every time one plugs the same USB wireless adapter into a new USB port, Windows creates a new network device with an auto-generated sequence number, which looks like this: Wireless-N USB Network Adapter #2, Wireless-N USB Network Adapter #3, and so on. The device name is displayed as part of the network's information in Control Panel | Network Connections. How can I get rid of this sequence number? I found out that the displayed device name is kept in the FriendlyName REG_SZ value under

        HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_[device specific string]\[usb port specific string]

    However, when I try to modify this value I get the error "Cannot edit FriendlyName: Error writing the value's new contents." I tried to delete the extra keys under

        HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_13B1&PID_0029

    but got "Cannot delete KEY NAME: Error while deleting key." Trying to solve this, I followed this answer, but when changing the owner with the "Replace owner on subcontainers and objects" option checked I got this error: "Registry Editor could not set owner on the currently selected, or some of its subkeys." To find out which subkey is the source of the problem, I tried changing the owner of each subkey. After successfully changing the owner of the Properties subkey, I saw it has subkeys which were previously hidden. Trying to change the owner of these subkeys now fails with a similar error (screenshot not preserved here). Any idea how to delete these keys?

  • mysql-python on Snow Leopard with MySQL 64-bit

    - by Derek Reynolds
    I can't seem to get mysql-python to work on Snow Leopard for the life of me. I'm using the default version of Python that ships with Snow Leopard (Python 2.6.1) and installed MySQL 5.1.45 x86_64. I downloaded the source for mysql-python from http://sourceforge.net/projects/mysql-python/ and compiled with the following commands:

        tar xzf MySQL-python-1.2.3c1.tar.gz
        cd MySQL-python-1.2.3c1
        ARCHFLAGS='-arch x86_64' python setup.py build
        ARCHFLAGS='-arch x86_64' python setup.py install

    And I get the following error when I try to import it:

        Python 2.6.1 (r261:67515, Jul  7 2009, 23:51:51)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import MySQLdb
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "build/bdist.macosx-10.6-universal/egg/MySQLdb/__init__.py", line 19, in <module>
          File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 7, in <module>
          File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 6, in __bootstrap__
        ImportError: dlopen(/Users/derek/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so, 2): no suitable image found.  Did find:
          /Users/derek/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so: mach-o, but wrong architecture

    Any ideas? Or what's the best route for starting over?
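
    "mach-o, but wrong architecture" means the _mysql.so that actually gets loaded doesn't match the running Python's architecture. One thing worth ruling out, offered as a sketch: the extension is loaded from a cache under ~/.python-eggs, so an earlier build made without ARCHFLAGS can keep shadowing a corrected one. Clearing both caches and rebuilding is cheap:

        # Check what was actually built; the output should mention x86_64
        file ~/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so
        # Remove the cached egg and the stale build tree, then rebuild cleanly
        rm -rf ~/.python-eggs/MySQL_python* build/
        ARCHFLAGS='-arch x86_64' python setup.py build
        sudo ARCHFLAGS='-arch x86_64' python setup.py install

    It's also worth confirming that the mysql_config found on the PATH belongs to the 64-bit MySQL install, since setup.py takes its library paths from it.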

  • Why is my Current Printer unavailable in Office ?

    - by cros
    Whenever I try to print any document from Microsoft Office 2007 on Windows Vista 64-bit, there is a great chance that the print job will fail with the following error message:

        Current printer is unavailable. Select another printer.

    The only problem is that no printer works, not even Bullzip PDF Printer. The only way to resolve this that I have found so far is a reboot, but I don't want to do that all the time. I've had the problem using both SP1 and SP2. It occurs on both locally installed and network printers, as well as on the virtual Bullzip PDF Printer. My primary source of the problem has been Excel, but the error has also occurred in Word. Changing the default printer and restarting the Office application solves this temporarily, but not permanently. Googling the error message returns a lot of questions but no solutions, so it seems to be a frequent problem. What could be a permanent solution?

    UPDATE: It seems that my problem stems from opening MS Office applications by opening a document from Total Commander running with administrative rights. This somehow makes the applications unable to find the printers. Opening MS Office applications either from the Start menu or by opening a document in a non-administrator Explorer allows me to print.

  • Troubleshooting transient Windows I/O "The parameter is incorrect." errors

    - by Kevin
    We have a set of .NET 2.0 applications running on Windows 2003 servers which have started experiencing transient "The parameter is incorrect." Windows I/O errors. These errors always happen when accessing a file share on a SAN. The fact that this error has happened with multiple applications and multiple servers leads me to believe that this is an infrastructure issue of some sort. The applications all run under the same domain account. When the errors occur, they generally resolve themselves within a few minutes. I can log in to the application server once the error starts occurring and access the file share myself with no problems. I have looked at the Windows event logs and haven't found anything useful. Due to the generic nature of "The parameter is incorrect.", I am looking for additional troubleshooting suggestions for this error. A sample stack trace is below. Note that while this example was from a directory creation operation, when the problem is occurring this exception is thrown for any file system operation on the share.

        Exception 1: System.IO.IOException
        Message: The parameter is incorrect.
        Method: Void WinIOError(Int32, System.String)
        Source: mscorlib
        Stack Trace:
            at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
            at System.IO.Directory.InternalCreateDirectory(String fullPath, String path, DirectorySecurity dirSecurity)
            at System.IO.Directory.CreateDirectory(String path, DirectorySecurity directorySecurity)
            at System.IO.Directory.CreateDirectory(String path)

  • Compiling realtime kernel from RHEL 6 MRG sources on CentOS 6

    - by Sashka B
    I'm trying to compile kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm from the RHEL 6 MRG source RPMs on a CentOS 6 x86_64 system. It's the first time I'm doing this, so I did some research on how to do it properly. From what I found, I did:

        rpm -ihv kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm
        cd ~/rpmbuild/SPECS
        nano kernel-rt.spec
        rpmbuild -bb kernel-rt.spec 2> build-err.log | tee build-out.log

    In kernel-rt.spec I disabled compilation of the variants I don't need, i.e. compile only rt and firmware. I also set it not to build debuginfo. After compilation finished, I had two files in ~/rpmbuild/RPMS/x86_64/:

        kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64.rpm
        kernel-rt-devel-2.6.33.9-rt31.75.el6rt.x86_64.rpm

    But when I tried to install the kernel, I got this error message:

        $ sudo rpm -ihv kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64.rpm
        error: Failed dependencies:
                kernel-rt-firmware = 2.6.33.9-rt31.75.el6rt is needed by kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64

    There was no ~/rpmbuild/RPMS/noarch folder, where I would expect the firmware package to show up. I also tried rpmbuild --rebuild kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm, with the same results. What am I doing wrong? I've seen this question, but it suggests what I've already tried, and I want to build the kernel myself, not use the pre-built one from SLC.
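
    kernel-rt-firmware is a noarch subpackage built from the same spec, which is why its absence from ~/rpmbuild/RPMS/noarch and the failed dependency go together; the symptom suggests the spec edits (or a build conditional) turned it off. A hedged checklist; the exact name of the conditional inside the spec is an assumption:

        # List the subpackages the edited spec would actually produce;
        # kernel-rt-firmware should appear here
        rpm -q --specfile kernel-rt.spec
        # Find the conditional guarding the firmware subpackage and make sure it is on
        grep -n -i firmware kernel-rt.spec | head
        # Last resort: install without the dependency check (firmware files under
        # /lib/firmware may then be missing for some drivers)
        sudo rpm -ihv --nodeps kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64.rpm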

  • Sending USR2 to mongrel_rails sometimes results in an “Address already in use” on the restart

    - by Ben
    We have a rolling-restart mode for our mongrel cluster that sends a USR2 signal to each running process. This works great most of the time, but very occasionally the mongrel process will shut down and then fail to restart, with the following error:

        /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize_without_backlog': Address already in use - bind(2) (Errno::EADDRINUSE)
                from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize'
                from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `new'
                from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `initialize'
                from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `new'
                from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `listener'

    Looking through the mongrel source, the USR2 handler calls a synchronous stop on the running server, so it ought to block until the socket has been released. Has anyone seen this error? Does anyone have any ideas what might cause it? (I asked this question over on Stack Overflow initially, but thought it might be more appropriate here.)
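
    One plausible mechanism, offered as a hypothesis: the old process's stop can return while the kernel still holds the port (lingering connections, or a child that inherited the descriptor), so the re-exec occasionally races the bind and loses. A pragmatic guard is to verify the listener actually came back after signalling, and retry or alert when it doesn't. A sketch with hypothetical port and pidfile paths:

        PORT=8000                                  # example port
        PIDFILE="/var/run/mongrel.$PORT.pid"       # path is an assumption
        kill -USR2 "$(cat "$PIDFILE")"
        # Give the old listener time to release the port and the new one to bind
        for i in $(seq 1 30); do
            sleep 1
            lsof -iTCP:"$PORT" -sTCP:LISTEN >/dev/null 2>&1 && exit 0
        done
        echo "mongrel on :$PORT did not come back after USR2" >&2
        exit 1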

  • ERROR: Can't find the archive-keyring

    - by 23tux
    I'm trying to upgrade my Debian Lenny to Squeeze. I've replaced the word lenny with squeeze in sources.list and ran:

        apt-get clean
        apt-get update
        apt-get dist-upgrade

    But after a while, I get this error:

        Preconfiguring packages ...
        Setting up debian-archive-keyring (2010.08.28) ...
        ERROR: Can't find the archive-keyring
        Is the ubuntu-keyring package installed?
        dpkg: error processing debian-archive-keyring (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         debian-archive-keyring
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    So I tried apt-get -f install debian-archive-keyring and got the same error. Then I tried apt-get -f install ubuntu-keyring and got this:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package ubuntu-keyring is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package ubuntu-keyring has no installation candidate

    Maybe I have the wrong sources in my sources.list:

        deb ftp://mirror.hetzner.de/debian/packages squeeze main contrib non-free
        deb ftp://mirror.hetzner.de/debian/security squeeze/updates main contrib non-free
        deb http://ftp.de.debian.org/debian/ squeeze main non-free contrib
        deb-src http://ftp.de.debian.org/debian/ squeeze main non-free contrib
        deb http://security.debian.org/ squeeze/updates main contrib non-free
        deb-src http://security.debian.org/ squeeze/updates main contrib non-free

    Hope someone can help me. Thx, tux
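
    The mention of ubuntu-keyring is the tell: that message text lives in some versions of the apt-key helper that the package's postinst invokes, and it appears when the expected keyring files are missing from /usr/share/keyrings. One commonly suggested way out, offered as a sketch rather than a guaranteed fix, is to install the squeeze keyring package by hand so apt has the archive keys before the dist-upgrade continues (the version below matches the one in the error; check the mirror's pool for the current filename):

        cd /tmp
        wget http://ftp.de.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2010.08.28_all.deb
        dpkg -i debian-archive-keyring_2010.08.28_all.deb
        apt-key update
        apt-get update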

  • A network-related or instance-specific error occurred while establishing a connection to SQL Server

    - by sf
    Hi, I'm getting the following error when trying to load an ASP.NET MVC app on IIS 7 with SQL Server 2008 Express. The app uses LINQ to SQL.

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    I've done some searching and all answers point to enabling TCP connections in SQL Server Configuration Manager, which I have done, to no avail. The connection string I am using is:

        Server=SERVERNAME\SQLEXPRESS;Database=DBName;Integrated Security=true

    The catch: I have another app that could already talk to the SQL Server just fine, even before I played around with the SQL Server Configuration settings. That app uses the following connection string:

        Data Source=SERVERNAME\SQLEXPRESS;Initial Catalog=OtherDbName;Integrated Security=True;Persist Security Info=False;Connect Timeout=120

    I've tried this connection string in the app that isn't working and it still doesn't work. Please help. I think I'm about to go crazy.

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and the various stuff required for building, such as autotools, and unpacked the CONTENTS of all those packages into this working directory, so that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib:

        checking for glib-2.0...
        checking for glib-config... no
        checking for glib12-config... no
        checking for glib-config... no
        checking for GLIB - version >= 1.2.6... no
        *** The glib-config script installed by GLIB could not be found
        *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
        *** your path, or set the GLIB_CONFIG environment variable to the
        *** full path to glib-config.
        configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.

    My system has glib installed:

        /lib/libglib-2.0.so.0
        /lib/libglib-2.0.so.0.2200.3

    and I've also downloaded and unpacked the glib packages into my working directory:

        libglib2.0-0_2.22.2-0ubuntu1_i386.deb
        libglib2.0-dev_2.22.2-0ubuntu1_i386.deb

    but still the elusive glib-config is nowhere to be found. It's not in any Debian package for Karmic, either. I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
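
    glib-config itself only ever shipped with glib 1.x, which is why no Karmic package contains it; glib 2.x is discovered through pkg-config instead, and the transcript shows configure trying the glib-2.0 module first. A sketch of pointing that check at the locally unpacked libglib2.0-dev (the $HOME/mc-build prefix is hypothetical):

        # .pc files from the unpacked libglib2.0-dev
        export PKG_CONFIG_PATH="$HOME/mc-build/usr/lib/pkgconfig"
        # Sanity check: should print 2.22.2
        pkg-config --modversion glib-2.0
        ./configure --prefix="$HOME/mc-build/usr"

    One caveat: .pc files from a distro package hardcode prefix=/usr, so they may need their prefix line edited (or pkg-config's --define-variable=prefix=... used) to point at the unpacked tree.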

  • Cisco ASA intermittently fails to see traffic

    - by DrStalker
        users
          |
        Mikrotik -- Internet
          |
        ASA
          |
        ServerA and ServerB

    I'm trying to troubleshoot a problem with a new Cisco ASA 5505. The network design is as above: the Mikrotik is the existing router, and ServerA and ServerB used to plug directly into it. ServerA has IP 10.30.1.10; ServerB has IP 10.30.1.11. The ASA is configured with no NAT, an "allow anything" firewall, and uses the Mikrotik as its default gateway. In effect, it is currently a simple IP router; the firewall and VPN stuff will all come later once the basics are working. The problem is that access to ServerA and ServerB is erratic: sometimes it works, sometimes it fails. It can fail for either one of the servers only, or for both.

    When it is working:

    - The Mikrotik logs show ping packets being sent out over the proper interface.
    - The ASA logs show the incoming connections.

    When it is failing:

    - The Mikrotik logs show ping packets being sent out over the proper interface.
    - The ASA logs show nothing reaching the ASA.

    The failure can affect one destination only (e.g. the Mikrotik is putting out packets to 10.30.1.10 and 10.30.1.11, but the ASA only sees the packets destined for 10.30.1.11) or one source only (e.g. ClientA on the users network can ping 10.30.1.11, but ClientB cannot). The problem can also be seen from the Mikrotik router itself: sometimes it can ping ServerA and ServerB, sometimes only one of them. What could be causing this? I can't think of any possible cause that is intermittent and could explain why the problem occurs for one destination server and not others.

    edit: Link to ASA config

  • PostgreSQL update problems with uuid

    - by kdl
    Hi, I am trying to run yum update on my CentOS 5.2 box and keep getting this message:

        Missing Dependency: libossp-uuid.so.15 is needed by package postgresql-contrib

    I ran yum update postgresql separately and now it's 8.3.8. I also downloaded uuid-1.6.2 and built it from source, but I still get the same result. yum update -d6 uuid gives me this at the end:

        --> Running transaction check
        ---> Package uuid.i386 0:1.6.1-3.el5.kb set to be updated
        Checking deps for uuid.i386 0-1.6.1-3.el5.kb - u
        Checking deps for uuid.i386 0-1.5.1-4.rhel5 - None
        postgresql-contrib requires: libossp-uuid.so.15
        --> Processing Dependency: libossp-uuid.so.15 for package: postgresql-contrib
        Needed Require is not a package name. Looking up: libossp-uuid.so.15
        Potential Provider: uuid.i386 0:1.5.1-4.rhel5
        Mode is u for provider of libossp-uuid.so.15: uuid.i386 0:1.5.1-4.rhel5
        Mode for pkg providing libossp-uuid.so.15: u
        Cannot find an update path for dep for: libossp-uuid.so.15
        Searching pkgSack for dep: libossp-uuid.so.15
        Potential match for libossp-uuid.so.15 from uuid - 1.5.1-4.rhel5.i386
        Matched uuid - 1.5.1-4.rhel5.i386 to require for libossp-uuid.so.15
        uuid - 1.5.1-4.rhel5.i386 is in providing packages but it is already installed, removing.
        --> Finished Dependency Resolution
        Dependency Process ending
        Error: Missing Dependency: libossp-uuid.so.15 is needed by package postgresql-contrib

    How can I resolve this situation? Thanks
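
    Reading the debug output: yum wants to update uuid to 1.6.1-3.el5.kb (a third-party build), but that would drop the libossp-uuid.so.15 that the installed postgresql-contrib still needs, and only the currently installed uuid 1.5.1 provides it. A source-built uuid doesn't help, because rpm knows nothing about it. A hedged sketch of the usual ways out:

        # See which packaged uuid versions provide the library yum is looking for
        yum provides 'libossp-uuid.so.15'
        # Option 1: hold uuid back so the rest of the update can proceed
        yum update --exclude=uuid
        # Option 2: take postgresql-contrib from a repo whose uuid matches
        # (e.g. the PGDG repository; that choice is an assumption)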

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery

    In particular I have been reading:

    - SSD and NTFS Compression Speed Increase?
    - Does NTFS compression slow SSD/flash performance?
    - Will somebody benchmark whole disk compression (HD, SSD) please? (may have to scroll up)

    The first link is particularly dreamy, but maybe heads a little too far into the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): "Using highly compressable data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]." Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore

    It so happens that, thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. This means that, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily occur only when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing

    The beginning of this post has me a bit timid; it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.

  • error in php not logging or displaying

    - by Grant M
    I cannot get any errors to display on screen or write to a log file. When I check the phpinfo() printout, I have the same master and local values:

        display_errors          On
        display_startup_errors  On
        error_log               /var/log/php.log
        error_reporting         E_ALL & ~E_NOTICE
        log_errors              On
        log_errors_max_len

    ls -l /var/log/php.log gives:

        -rw-rw-rw- 1 root root 0 Jun 21 07:47 /var/log/php.log

    and for /var and /var/log:

        drwxrwxrwx 23 root root 4096 Jun  2 11:13 var

    When there is an error in the code, the browser shows nothing and reports that there is no source for the page. Any suggestions on where else to look, or what to change, so that errors appear somewhere (anywhere would be good)?

    edit: My error script is now:

        <?php
        ini_set('display_errors', 1);
        error_reporting(E_ALL);
        echo "print from error.php 2 ";
        error //print from error to cause logging to happen.
        ?>

    This will print to the display and to the log:

        Notice: Use of undefined constant error - assumed 'error' in /var/www/piku_dev2/error.php on line 7

    But if I put garbage like #@$%$ on the error line, I get no error messages anywhere.

    Edit2: The problem turned out to be in the httpd.conf file. I don't know what it was yet, as it was fixed by someone else.
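
    The #@$%$ case has a standard explanation worth noting: that's a parse error, which aborts the script before ini_set() ever runs, so runtime settings can never surface it; display_errors has to be enabled in php.ini or the web server config for parse errors to show. That also fits the fix living in httpd.conf, where a php_admin_flag can override anything set at runtime. A couple of checks from the shell:

        # Lint the file; parse errors are reported here regardless of ini settings
        php -l /var/www/piku_dev2/error.php
        # See the CLI's config for comparison; the Apache SAPI may load a different
        # php.ini, so cross-check against the phpinfo() output from the browser
        php -i | grep -E 'Loaded Configuration File|display_errors|error_log'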
