Search Results

Search found 32865 results on 1315 pages for 'found'.

  • What's the condition for setting a cookie in Safari?

    - by Woho87
    Hi! I have a problem with Safari on Mac not sending back the cookies I set. I can see that they are set under Preferences - Cookies, but they are never sent back to my server. And I'm not setting the cookies in an HTTP 302 response, which was a bug I found reported here. There must be a lot of you out there with the same issue. How did you get it to work? And yes, I have searched countless times and found nothing on this issue.
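
    For reference, the rough shape of the header I am setting - the name, value, and domain here are placeholders, and I've read Safari can be picky about the Domain attribute matching the host:

      HTTP/1.1 200 OK
      Content-Type: text/html
      Set-Cookie: session=abc123; Path=/; Domain=.example.com; Expires=Fri, 31-Dec-2010 23:59:59 GMT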

  • Count function calls by name or signature. Gcc, C++

    - by MajesticRa
    I have a C++ package on Linux, built with gcc. I can modify the compilation process (change the Makefile, flags, etc.) but cannot change the C++ source code. One runs the package with different parameters; it does its job and exits. How do I count, over the course of a run:

      1) the number of calls to a function with a specific name?
      2) the number of calls to functions with a specific signature?
      3) the number of calls to functions where one of the parameters is of a specific type, e.g. std::string (the type being given by the signature)?
      4) and, as a bonus, the number of calls to member functions of STL objects, e.g. the std::string copy constructor?

    I thought of doing it with GDB, but I found it very tough to do (1) and have not found how to do (2)-(4) at all. All acceptable answers I will write up here, for humanity.
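
    For (1), the GDB route I attempted looks roughly like this - my_func and mypackage are placeholder names, and this assumes symbols are available:

      $ gdb --batch ./mypackage \
            -ex "break my_func" \
            -ex "ignore 1 1000000" \
            -ex "run" \
            -ex "info breakpoints"

    The ignore count keeps the run from stopping, and the final "info breakpoints" reports "breakpoint already hit N times". break also accepts a quoted full signature, e.g. break 'MyClass::f(std::string)', which might cover (2), but this does not scale to (3)-(4).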

  • Tomcat SSL Configuration

    - by bdares
    I received an SSL cert to use for a Tomcat 6.0 server, ready to use. I configured Tomcat to use it with the following in server.xml:

      <Connector port="8443" maxThreads="200"
                 scheme="https" secure="true" SSLEnabled="true"
                 keystoreFile="C:\Tomcat 6.0\ssl\cert" keystorePass="*****"
                 clientAuth="false" sslProtocol="TLS"/>

    I started Tomcat from the command prompt so I could see any error messages as they happened. There were none. The results of accessing different URLs:

      http://localhost - normal page loads fine
      https://localhost - browser claims the page cannot be found
      https://localhost:8443 - page cannot be found
      http://localhost:8443 - offers a certificate, and after it is accepted, redirects to https://localhost

    (I suspect the https:// URLs initially offer the certificate, which is automatically accepted by the browser, as it was issued by Verisign.) How do I fix this? Edit: I've also tried port="443". Same result.
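
    For completeness, the usual companion plain-HTTP connector looks like this - a sketch with illustrative ports, assuming the SSL connector above stays on 8443:

      <Connector port="80" protocol="HTTP/1.1"
                 connectionTimeout="20000"
                 redirectPort="8443"/>

    Note that, as I understand Tomcat, redirectPort only comes into play for resources that web.xml marks with a CONFIDENTIAL transport guarantee, so this alone may not explain the behavior above.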

  • Generic type parameter naming convention for Java (with multiple chars)?

    - by chaper29
    In some interfaces I wrote, I'd like to name generic type parameters with more than one character to make the code more readable. Something like

      Map<Key,Value>

    instead of this:

      Map<K,V>

    But when it comes to methods, the type parameters look like Java classes, which is also confusing:

      public void put(Key key, Value value)

    This reads as though Key and Value were classes. I have found or thought of some notations, but nothing like a convention from Sun or a general best practice. Alternatives I guessed at or found:

      Map<KEY,VALUE>
      Map<TKey,TValue>
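
    To make the comparison concrete, here is how the two candidate styles read in a small (made-up) interface declaration:

      // Single letters: terse, and unmistakably type parameters
      interface Cache<K, V> {
          void put(K key, V value);
      }

      // T-prefixed: more descriptive, still visibly distinct from class names
      interface Cache<TKey, TValue> {
          void put(TKey key, TValue value);
      }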

  • Autoplaying video/image gallery - with a catch

    - by Ran
    Looking for a (jQuery) video/image gallery to implement on my website, I've found many freely available options. However, I need one that, when in "autoplay" mode and it reaches a video file, automatically plays the entire file, and only then moves on to the next image/video - similar to what Picasaweb does when you autoplay an online gallery of your images/videos. I have found, for example, this plugin: http://www.yoxigen.com/yoxview/. Notice that images 6 & 8 are actually video feeds, but in autoplay mode they are treated just like any other image, i.e. displayed for a few seconds before moving on to the next file... not what I need. Thank you in advance.
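
    The behavior I'm after, sketched in plain JavaScript - gallery.next() stands in for whatever advance function the plugin exposes, and the 5-second image delay is arbitrary:

      // Advance after a fixed delay for images, but wait for the
      // 'ended' event before advancing past a video.
      function showSlide(el) {
          if (el.tagName === 'VIDEO') {
              el.addEventListener('ended', function () {
                  gallery.next();   // move on only when playback finishes
              }, false);
              el.play();
          } else {
              setTimeout(function () { gallery.next(); }, 5000);
          }
      }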

  • PHP & MySQL Problem

    - by Fincha
    I have a script that reads text from one DB and posts it to another DB. The problem is that if the text is longer than about 840 words, I can't call the page at all - I get an error like "Not Found" or "Connection broken" or similar. In Firefox I get no error, only a blank page. I have worked out that the problem is the length of the query I send... but how can I fix it? Could the problem be that the query is longer than 6000 characters?
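
    One direction I've seen suggested, sketched with mysqli prepared statements so the text travels as a bound parameter instead of inside the SQL string - the connection details and table/column names here are made up:

      <?php
      // Hypothetical target DB and schema; adjust to the real ones.
      $db = new mysqli('localhost', 'user', 'password', 'target_db');

      $stmt = $db->prepare('INSERT INTO articles (body) VALUES (?)');
      $stmt->bind_param('s', $text);   // text length no longer affects the SQL string
      $stmt->execute();
      $stmt->close();
      ?>

    Separately, if the text is being passed between pages in the URL (GET), that would also explain the symptom: URLs have length limits that a POST request avoids.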

  • Use Python to search one .txt file for a list of words or phrases (and show the context)

    - by prupert
    Basically as the question states. I am fairly new to Python and like to learn by seeing and doing. I would like to create a script that searches through a text document (say the text copied and pasted from a news article for example) for certain words or phrases. Ideally, the list of words and phrases would be stored in a separate file. When getting the results, it would be great to get the context of the results. So maybe it could print out the 50 characters in the text file before and after each search term that has been found. It'd be cool if it also showed what line the search term was found on. Any pointers on how to code this, or even code examples would be much appreciated.
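
    Something along these lines is the shape I'm imagining - terms.txt and article.txt are made-up file names, and for simplicity the context is taken from the matching line:

      # Print each match with up to 50 characters of context on either
      # side, plus the line number it was found on.
      with open('terms.txt') as f:
          terms = [line.strip() for line in f if line.strip()]

      with open('article.txt') as f:
          for lineno, line in enumerate(f, start=1):
              for term in terms:
                  start = line.find(term)
                  while start != -1:
                      ctx = line[max(0, start - 50):start + len(term) + 50]
                      print('line %d: ...%s...' % (lineno, ctx.strip()))
                      start = line.find(term, start + 1)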

  • .net File.Copy very slow when copying many small files (not over network)

    - by Guavaman
    I'm making a simple folder-sync backup tool for myself and ran into quite a roadblock using File.Copy. In tests copying a folder of ~44,000 small files (Windows mail folders) to another drive in my system, I found that File.Copy was over 3x slower than running xcopy from a command line to copy the same files/folders. My C# version takes over 16 minutes to copy the files, whereas xcopy takes only 5 minutes. I've tried searching for help on this topic, but all I find is people complaining about slow copying of large files over a network. This is neither a large-file problem nor a network-copying problem. I found an interesting article about a better File.Copy replacement, but the code as posted has some errors which cause problems with the stack, and I am nowhere near knowledgeable enough to fix the problems in that code. Are there any common or easy ways to replace File.Copy with something speedier?
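
    For concreteness, the kind of replacement I mean - a minimal sketch using FileStream with an explicit buffer; the 64 KB size is an arbitrary guess to be tuned:

      using System.IO;

      static class FastCopy
      {
          // Copy one file through a large explicit buffer instead of File.Copy.
          public static void Copy(string source, string dest)
          {
              const int BufferSize = 64 * 1024;  // 64 KB; tune experimentally
              using (var input  = new FileStream(source, FileMode.Open, FileAccess.Read, FileShare.Read, BufferSize))
              using (var output = new FileStream(dest, FileMode.Create, FileAccess.Write, FileShare.None, BufferSize))
              {
                  var buffer = new byte[BufferSize];
                  int read;
                  while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                      output.Write(buffer, 0, read);
              }
          }
      }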

  • HTTP 500 Internal Server Error on IIS 7.5 with MVC3

    - by Tor Haugen
    I am trying to install an MVC3 application on our production server, with no luck. The application is from a 3rd party (compiled), so debugging is not available to me. Besides, I strongly suspect the error occurs before any code in the site has a chance to execute. Our staging server is - as far as I can determine - set up exactly like the production server. Both run Windows Server 2008 Standard R2, and both also run a SharePoint 2010 site (though this install doesn't touch that in any way). IIS is version 7.5, and .NET Framework 4.0 (required by the MVC app) was recently installed by me, with a reboot after. The application is very small and simple and, as far as I can tell, sticks to fairly standard functionality - including forms authentication (i.e. it doesn't pull any dirty tricks). The error message shown in the browser is very general:

      HTTP Error 500.0 - Internal Server Error
      An error message detailing the cause of this specific request failure can be found in
      the application event log of the web server. Please review this log entry to discover
      what caused this error to occur.

    The bit about 'an error message detailing the cause' being in the application event log seems to be just speculation - a pious hope that whatever code actually caused the error will log it. Nothing useful is to be found in the event log (only the very same message, logged by IIS):

      Module:        AspNetInitClrHostFailureModule
      Notification:  BeginRequest
      Handler:       StaticFile
      Error Code:    0x80070002
      Requested URL: http://xxxxxx.xxxxxx.xx:80/
      Physical Path: C:\Xxxxxxx\Prod\WebClient
      Logon Method:  Not yet determined
      Logon User:    Not yet determined

    Using Failed Request Tracing, I have been able to track the error (as also indicated above) to the AspNetInitClrHostFailureModule:

      103. NOTIFY_MODULE_START
           ModuleName: AspNetInitClrHostFailureModule
           Notification: BEGIN_REQUEST (1), fIsPostNotification: false
      104. SET_RESPONSE_ERROR_DESCRIPTION
           ErrorDescription: An error message detailing the cause of this specific request
           failure can be found in the application event log of the web server. Please
           review this log entry to discover what caused this error to occur.
      105. MODULE_SET_RESPONSE_ERROR_STATUS
           ModuleName: AspNetInitClrHostFailureModule
           Notification: BEGIN_REQUEST (1)
           HttpStatus: 500, HttpReason: Internal Server Error, HttpSubStatus: 0
           ErrorCode: 2147942402 - The system cannot find the file specified. (0x80070002)
           ConfigExceptionInfo: (empty)

    So there you have it. Seemingly, the AspNetInitClrHostFailureModule fails to find some file. Some questions, then:

      1. What is the AspNetInitClrHostFailureModule? It is not listed in the fairly exhaustive list of modules configurable in IIS Manager for the site. I have had no success googling it either. Maybe it's secret.
      2. I access the root URL of the site. This is supposed to be redirected to /Account/LogOn by the FormsAuthenticationModule. Why then is the handler StaticFile? Is that a clue?
      3. I have tried removing the infamous system.webserver/modules/runAllManagedModulesForAllRequests attribute, and that makes the error go away (but MVC not actually working, of course). I am prepared to specify all necessary modules manually if that's what it takes, but if the AspNetInitClrHostFailureModule is actually needed, I will be just as stuck. Does anyone know, or can anyone direct me to someone who knows, exactly what modules a typical MVC3 application actually needs?

    This question might well be a duplicate of this one, but he didn't get any useful answers and asked less specific questions, so I'll have my own go. Hoping for some help here :) Edit: I have now tried setting up a trivial MVC 3 project on the server. I created a new project using the MVC Application template, compiled it and deployed it to the server. It behaves in exactly the same way. The server simply cannot run MVC 3 projects.
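
    For what it's worth, the one repair step I know of that matches this symptom - .NET 4.0 installed but not (re)registered with IIS - is re-running the registration tool; whether it applies here is a guess on my part:

      C:\> %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru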

  • How do I stop and repair a RAID 5 array that has failed and has I/O pending?

    - by Ben Hymers
    The short version: I have a failed RAID 5 array which has a bunch of processes hung waiting on I/O operations on it; how can I recover from this?
    The long version: Yesterday I noticed Samba access was being very sporadic; accessing the server's shares from Windows would randomly lock up Explorer completely after clicking on one or two directories. I assumed it was Windows being a pain and left it. Today the problem is the same, so I did a little digging; the first thing I noticed was that running ps aux | grep smbd gives a lot of lines like this:

      ben    969 0.0 0.2 96088 4128 ?  D  18:21  0:00 smbd -F
      root  1708 0.0 0.2 93468 4748 ?  Ss 18:44  0:00 smbd -F
      root  1711 0.0 0.0 93468 1364 ?  S  18:44  0:00 smbd -F
      ben   3148 0.0 0.2 96052 4160 ?  D  Mar07  0:00 smbd -F
      ...

    There are a lot of processes stuck in the "D" state. Running ps aux | grep " D" shows up some other processes, including my nightly backup script, all of which need to access the volume mounted on my RAID array at some point. After some googling, I found that it might be down to the RAID array failing, so I checked /proc/mdstat, which shows this:

      ben@jack:~$ cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid5 sdb1[3](F) sdc1[1] sdd1[2]
            2930271872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      unused devices: <none>

    And running mdadm --detail /dev/md0 gives this:

      ben@jack:~$ sudo mdadm --detail /dev/md0
      /dev/md0:
              Version : 00.90
        Creation Time : Sat Oct 31 20:53:10 2009
           Raid Level : raid5
           Array Size : 2930271872 (2794.53 GiB 3000.60 GB)
        Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
         Raid Devices : 3
        Total Devices : 3
      Preferred Minor : 0
          Persistence : Superblock is persistent
          Update Time : Mon Mar  7 03:06:35 2011
                State : active, degraded
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 1
        Spare Devices : 0
               Layout : left-symmetric
           Chunk Size : 64K
                 UUID : f114711a:c770de54:c8276759:b34deaa0
               Events : 0.208245

          Number   Major   Minor   RaidDevice   State
             3       8       17        0        faulty spare rebuilding   /dev/sdb1
             1       8       33        1        active sync               /dev/sdc1
             2       8       49        2        active sync               /dev/sdd1

    I believe this says that sdb1 has failed, and so the array is running with two drives out of three 'up'. Some advice I found said to check /var/log/messages for notices of failures, and sure enough there are plenty:

      ben@jack:~$ grep sdb /var/log/messages
      ...
      Mar  7 03:06:35 jack kernel: [4525155.384937] md/raid:md0: read error NOT corrected!! (sector 400644912 on sdb1).
      Mar  7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644920 on sdb1).
      Mar  7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644928 on sdb1).
      Mar  7 03:06:35 jack kernel: [4525155.389688] md/raid:md0: read error not correctable (sector 400644936 on sdb1).
      Mar  7 03:06:56 jack kernel: [4525176.231603] sd 0:0:1:0: [sdb] Unhandled sense code
      Mar  7 03:06:56 jack kernel: [4525176.231605] sd 0:0:1:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
      Mar  7 03:06:56 jack kernel: [4525176.231608] sd 0:0:1:0: [sdb] Sense Key : Medium Error [current] [descriptor]
      Mar  7 03:06:56 jack kernel: [4525176.231623] sd 0:0:1:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed
      Mar  7 03:06:56 jack kernel: [4525176.231627] sd 0:0:1:0: [sdb] CDB: Read(10): 28 00 17 e1 5f bf 00 01 00 00

    To me it is clear that device sdb has failed, and I need to stop the array, shut down, replace it, reboot, then repair the array, bring it back up and mount the filesystem.
    I cannot hot-swap a replacement drive in, and don't want to leave the array running in a degraded state. I believe I am supposed to unmount the filesystem before stopping the array, but that is failing, and that is where I'm stuck now:

      ben@jack:~$ sudo umount /storage
      umount: /storage: device is busy.
              (In some cases useful info about processes that
               use the device is found by lsof(8) or fuser(1))

    It is indeed busy; there are some 30 or 40 processes waiting on I/O. What should I do? Should I kill all these processes and try again? Is that a wise move when they are 'uninterruptible'? What would happen if I tried to reboot? Please let me know what you think I should do. And please ask if you need any extra information to diagnose the problem or to help!
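
    To make answers concrete, this is the sequence I am considering - the fail/remove step is my guess at what mdadm expects, so please correct it:

      # See what is holding the mount (fuser itself may hang on D-state processes)
      sudo fuser -vm /storage

      # Mark the failed disk and pull it from the array
      sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

      # Lazy-unmount, stop the array, then power down to swap the drive
      sudo umount -l /storage
      sudo mdadm --stop /dev/md0
      sudo shutdown -h now

      # After replacing and repartitioning the new disk:
      sudo mdadm /dev/md0 --add /dev/sdb1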

  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running OpenSUSE 11.3 that I can't really upgrade to a newer version of OpenSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server that depend on both cURL and OpenSSL. I discovered that the PHP 5.3.3 binaries on the server included cURL but not OpenSSL, so I downloaded the latest PHP sources, extracted them, and ran:

      ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets

    This failed: the configure script complained that it couldn't find cURL:

      checking for cURL support... yes
      checking for cURL in default path... not found
      configure: error: Please reinstall the libcurl distribution -
          easy.h should be in <curl-dir>/include/curl/

    I tried to install libcurl by running zypper install libcurl-devl. This failed too:

      doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl
      Loading repository data...
      Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server.
      Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server.
      Reading installed packages...
      'libcurl-devl' not found in package names. Trying capabilities.
      No provider of 'libcurl-devl' found.
      Resolving package dependencies...
      Nothing to do.

    However, libcurl-devl is listed when I run zypper search curl:

      doom:~/phpworksite/php-5.5.15 # zypper search curl
      Loading repository data...
      Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server.
      Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server.
      Reading installed packages...

      S | Name                        | Summary                                                  | Type
      --+-----------------------------+----------------------------------------------------------+--------
      i | curl                        | A Tool for Transferring Data from URLs                   | package
        | curlftpfs                   | Filesystem for mounting FTP hosts using FUSE and libcurl | package
        | libcurl-devel               | A Tool for Transferring Data from URLs                   | package
      i | libcurl4                    | cURL shared library version 4                            | package
      i | perl-WWW-Curl               | Perl extension interface for libcurl                     | package
      i | php5-curl                   | PHP5 Extension Module                                    | package
        | python-curl                 | Python module interface to the cURL library              | package
        | python-curl-doc             | Documentation for python-curl                            | package
        | xmms2-plugin-curl           | Curl Support for xmms2                                   | package
        | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl          | package

    Here are the current repositories:

      doom:~/phpworksite/php-5.5.15 # zypper repos
      #  | Alias                                        | Name                                         | Enabled | Refresh
      ---+----------------------------------------------+----------------------------------------------+---------+--------
      1  | PHP_extensions_(openSUSE_11.3)               | PHP_extensions_(openSUSE_11.3)               | No      | Yes
      2  | Packman_11.3                                 | Packman_11.3                                 | Yes     | Yes
      3  | Updates for openSUSE 11.3 11.3-1.82          | Updates for openSUSE 11.3 11.3-1.82          | Yes     | Yes
      4  | openSUSE_11.3_OSS                            | openSUSE_11.3_OSS                            | Yes     | Yes
      5  | openSUSE_11.3_Updates                        | openSUSE_11.3_Updates                        | Yes     | Yes
      6  | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No      | Yes
      7  | repo-debug                                   | openSUSE-11.3-Debug                          | No      | Yes
      8  | repo-non-oss                                 | openSUSE-11.3-Non-Oss                        | Yes     | Yes
      9  | repo-oss                                     | openSUSE-11.3-Oss                            | Yes     | Yes
      10 | repo-source                                  | openSUSE-11.3-Source                         | No      | Yes

    BTW, I did try building PHP without cURL, but it broke a lot of things, so apparently I really do need cURL.
    My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?
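
    Worth noting for anyone reading closely: the search output above lists the package as libcurl-devel (with an "e"), while the install command asked for libcurl-devl - so one obvious thing to try first is:

      zypper install libcurl-devel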

  • Magento installation problem on Nginx in Windows

    - by Nithin
    I am trying to install Magento locally using nginx as the web server instead of Apache. I copied the magento folder into the html directory. When I try to open the magento folder, I get a 404 Not Found error. I am able to access other PHP files set up in the html folder, and PHP is installed. Here is my config file:

      #user  nobody;
      worker_processes  1;

      #error_log  logs/error.log;
      #error_log  logs/error.log  notice;
      #error_log  logs/error.log  info;

      #pid        logs/nginx.pid;

      events {
          worker_connections  1024;
      }

      http {
          include       mime.types;
          default_type  application/octet-stream;

          #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
          #                  '$status $body_bytes_sent "$http_referer" '
          #                  '"$http_user_agent" "$http_x_forwarded_for"';

          #access_log  logs/access.log  main;

          sendfile        on;
          #tcp_nopush     on;

          #keepalive_timeout  0;
          keepalive_timeout  65;

          #gzip  on;

          server {
              listen       8080;
              server_name  localhost;

              #charset koi8-r;
              #access_log  logs/host.access.log  main;

              location / {
                  root   html;
                  index  index.html index.htm index.php;
              }

              #error_page  404  /404.html;

              # redirect server error pages to the static page /50x.html
              error_page  500 502 503 504  /50x.html;
              location = /50x.html {
                  root   html;
                  allow  all;
              }

              # proxy the PHP scripts to Apache listening on 127.0.0.1:80
              #location ~ \.php$ {
              #    proxy_pass  http://127.0.0.1;
              #}

              # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
              location ~ \.php$ {
                  root            html;
                  fastcgi_pass    127.0.0.1:9000;
                  fastcgi_index   index.php;
                  fastcgi_param   SCRIPT_FILENAME  c:/nginx/html/$fastcgi_script_name;
                  include         fastcgi_params;
              }

              # deny access to .htaccess files, if Apache's document root
              # concurs with nginx's one
              #location ~ /\.ht {
              #    deny  all;
              #}
          }

          # another virtual host using mix of IP-, name-, and port-based configuration
          #server {
          #    listen       8000;
          #    listen       somename:8080;
          #    server_name  somename  alias  another.alias;
          #    location / {
          #        root   html;
          #        index  index.html index.htm;
          #    }
          #}

          # HTTPS server
          #server {
          #    listen       443;
          #    server_name  localhost;
          #    ssl                  on;
          #    ssl_certificate      cert.pem;
          #    ssl_certificate_key  cert.key;
          #    ssl_session_timeout  5m;
          #    ssl_protocols  SSLv2 SSLv3 TLSv1;
          #    ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
          #    ssl_prefer_server_ciphers   on;
          #    location / {
          #        root   html;
          #        index  index.html index.htm;
          #    }
          #}
      }

    How do I fix this? This is what I found in the error.log file:

      2011/09/06 12:22:35 [error] 5632#0: *1 "/cygdrive/c/nginx/html/magento/index.php/install/index.html"
      is not found (20: Not a directory), client: 127.0.0.1, server: localhost,
      request: "GET /magento/index.php/install/ HTTP/1.1", host: "localhost:8080"
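
    The error log hints at the underlying issue: Magento uses URLs like /index.php/install/, and the location ~ \.php$ block above only matches URLs that end in .php. A common fix I've seen - sketched from memory, untested on Windows - is to split out PATH_INFO:

      location ~ ^(.+\.php)(.*)$ {
          fastcgi_split_path_info  ^(.+\.php)(.*)$;
          fastcgi_pass    127.0.0.1:9000;
          fastcgi_index   index.php;
          fastcgi_param   SCRIPT_FILENAME  c:/nginx/html$fastcgi_script_name;
          fastcgi_param   PATH_INFO        $fastcgi_path_info;
          include         fastcgi_params;
      }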

  • Kernel Mode Rootkit

    - by Pajarito
    On the other 3 computers in my family, I believe we have a kernel-mode rootkit for Windows. It appears that the same rootkit is on all of them. We think. We changed all the important passwords from my computer, which is running Linux right now. All of the infected computers have Symantec Endpoint Protection, because it's free from the university where my mom and dad work. In my opinion Symantec is a piece of crap, seeing as it didn't even manage to delete the tracking cookies it found when I tried it on my own computer. The computers and their set-ups:

      Computer A: Vista Business; Symantec antivirus; runs as admin, no password; IE8; no other security software other than what comes with Windows; IE8 security settings at the default
      Computer B: XP Home Premium; Symantec antivirus; runs as a normal user, no password; admin account with a weak password; Spybot; uses IE8 with default settings, sometimes Firefox
      Computer C: XP Home Premium; Symantec antivirus; runs as a normal user, no password; admin account with a weak password; uses IE8 with default settings; no other security programs except what came with Windows

    This is what's happening, cut and pasted from my dad's forum post:

    -- When I scanned my laptop (Dell XPS M1330 with Windows Vista Small Business), Symantec Endpoint Protection hangs for a while, perhaps 10 seconds or so, on some of the following files: 9129837.exe, hide_evr2.sys, VirusRemoval.vbs, NewVirusRemoval.vbs, dll.dll, alsmt.ext, and _epnt.sys. It does this if I run a scan that I set up to run on a new thumb drive, and it does this even if the thumb drive is not plugged in. It doesn't seem to do this if I scan only the C: drive. I've checked for problems with Symantec Endpoint Protection and also with Microsoft Security Essentials and Malwarebytes Anti-Malware. They found nothing, and I can't find anything by searching for hidden files. Next I tried Microsoft's RootkitRevealer. It finds 279660 (or so) discrepancies, and the interface is so glitchy after that that I can't really figure out what is going on. The screen is squirrely. RootkitRevealer pulls up many files in the folder \programdata\applicationdata, with numerous \applicationdata appended on the end as well. --

    As you can see, what we did was install MSE and MBAM and scan with both of them. Nothing but a tracking cookie. Then I took over and ran Microsoft's rootkitrevealer.exe from a flash drive. It found a bunch of discrepancies, but only about 20 or so were security related, the rest being files that you just couldn't see from Windows Explorer. I couldn't see whether or not the files listed above - the ones the scan was hanging on - were in the list. The other thing is, I have no idea what to do about the things the scan comes up with. Then we checked the other computers, and they do the same thing when you scan with Symantec. The people at the university seem to think that dad might not have a virus, but 2 of the computers slowed down noticeably AND IE8 started acting all funny. None of my family is very computer oriented, and 2 of the possible causes for the rootkit are:

      - My dad bought a new flash drive, which shipped with a data-security executable on it
      - My dad has to download lots of articles for his work

    Those are the only things that stand out, but it could have been anything. We are currently backing up our data, and I'll post again after trying IceSword 1.22. I just looked at my dad's forum topic, and someone recommended GMER. I'll try that too.

  • Broadcom 5722 NIC not installed on Ubuntu Server, although driver present

    - by Bastien
    Hello, I just installed Ubuntu Server 10.04 LTS, running kernel 2.6.32-24-server, on a brand new Dell T110 server, supposedly fully compatible with Ubuntu Server. I have two NICs: one onboard, the other an additional card on PCI. Both of them are Broadcom NetXtreme 5722. On the first boot of the system I could see both cards as eth0 and eth1 (with ifconfig). I configured eth0 with a static IP (as planned) and did not configure eth1. After rebooting, one of the two NICs "disappeared": it does not appear in ifconfig at all. The one that disappeared is the onboard one. I investigated a bit and found the following things. The card is seen, but not "installed"; it appears as "UNCLAIMED" in lshw:

      *-network UNCLAIMED
           description: Ethernet controller
           product: NetXtreme BCM5722 Gigabit Ethernet PCI Express
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:04:00.0
           version: 00
           width: 64 bits
           clock: 33MHz
           capabilities: pm vpd msi pciexpress cap_list
           configuration: latency=0
           resources: memory:df9f0000-df9fffff
      *-network
           description: Ethernet interface
           product: NetXtreme BCM5722 Gigabit Ethernet PCI Express
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:05:00.0
           logical name: eth0
           version: 00
           serial: 00:10:18:60:23:64
           size: 100MB/s
           capacity: 1GB/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.102 duplex=full firmware=5722-v3.09 ip=10.129.167.25 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s
           resources: irq:35 memory:dfaf0000-dfafffff

    So I checked my dmesg and found a few strange lines showing there actually is a problem bringing up this card:

      [    3.737506] tg3: Could not obtain valid ethernet address, aborting.
      [    3.737527] tg3 0000:04:00.0: PCI INT A disabled
      [    3.737535] tg3: probe of 0000:04:00.0 failed with error -22
      [    3.737553] alloc irq_desc for 17 on node -1
      [    3.737555] alloc kstat_irqs on node -1
      [    3.737560] tg3 0000:05:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
      [    3.737566] tg3 0000:05:00.0: setting latency timer to 64
      [    3.793529] eth0: Tigon3 [partno(BCM95722A2202G) rev a200] (PCI Express) MAC address 00:10:18:60:23:64
      [    3.793532] eth0: attached PHY is 5722/5756 (10/100/1000Base-T Ethernet) (WireSpeed[1])
      [    3.793534] eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
      [    3.793536] eth0: dma_rwctrl[76180000] dma_mask[64-bit]

    This confirms that one NIC is recognized and the other is not.
    I researched a bit more with lspci -v:

      04:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
              Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
              Flags: fast devsel, IRQ 16
              Memory at df9f0000 (64-bit, non-prefetchable) [size=64K]
              Capabilities: [48] Power Management version 3
              Capabilities: [50] Vital Product Data <?>
              Capabilities: [58] Vendor Specific Information <?>
              Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
              Capabilities: [d0] Express Endpoint, MSI 00
              Capabilities: [100] Advanced Error Reporting <?>
              Capabilities: [13c] Virtual Channel <?>
              Capabilities: [160] Device Serial Number 00-00-00-fe-ff-00-00-00
              Kernel modules: tg3

      05:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
              Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
              Flags: bus master, fast devsel, latency 0, IRQ 35
              Memory at dfaf0000 (64-bit, non-prefetchable) [size=64K]
              Expansion ROM at <ignored> [disabled]
              Capabilities: [48] Power Management version 3
              Capabilities: [50] Vital Product Data <?>
              Capabilities: [58] Vendor Specific Information <?>
              Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+
              Capabilities: [d0] Express Endpoint, MSI 00
              Capabilities: [100] Advanced Error Reporting <?>
              Capabilities: [13c] Virtual Channel <?>
              Capabilities: [160] Device Serial Number 64-23-60-fe-ff-18-10-00
              Capabilities: [16c] Power Budgeting <?>
              Kernel driver in use: tg3
              Kernel modules: tg3

    Here I could see that the failed card's device serial number (which encodes the MAC address) is 00-00-00-fe-ff-00-00-00, which, according to some forum posts on several websites, could be the issue. I've researched everything I could on the net and found several people having slightly comparable issues, but they usually involve different hardware and do not provide a proper explanation or solution... I would appreciate it if anyone around here has some info to share! Thanks

  • Ubuntu server random hangups

    - by Ebbe
    Hello all, this is my first post to this forum, which I found through the superb StackOverflow podcast on IT Conversations. I am quite new in my role as server administrator for an exhibition center in London. Basically we have a central file and SQL server to which roughly 40 stations connect to upload/download data used/captured by a set of applications. Over the last weeks we have experienced a few random hangups in our applications, and as it always happens to multiple applications simultaneously, I do not believe that the applications are the source of the problem. We also monitor the network using Dartware InterMapper, which indicates that all switches and stations on the network have been reachable during the downtime. Thus, it's all pointing to the server. I have been looking through all the log files I can think of, and the only thing so far that I have found suspicious is the following lines in the syslog, which are from the time of one of the hangups:

      Feb  6 17:14:27 es named[5582]: client 127.0.0.1#33721: RFC 1918 response from Internet for 150.0.168.192.in-addr.arpa
      Feb  6 17:14:40 es named[5582]: client 127.0.0.1#32899: RFC 1918 response from Internet for 152.0.168.192.in-addr.arpa
      Feb  6 17:15:01 es /USR/SBIN/CRON[1956]: (es) CMD (/home/es/apps/es/bin/es_checksum.sh)
      Feb  6 17:16:06 es /USR/SBIN/CRON[2031]: (es) CMD (/home/es/apps/es/bin/es_checksum.sh)
      Feb  6 17:21:00 es named[5582]: *** POKED TIMER ***
      Feb  6 17:21:00 es last message repeated 2 times
      Feb  6 17:21:07 es named[5582]: client 127.0.0.1#44194: RFC 1918 response from Internet for 143.0.168.192.in-addr.arpa
      Feb  6 17:21:12 es named[5582]: client 127.0.0.1#59004: RFC 1918 response from Internet for 164.0.168.192.in-addr.arpa

    I find a few interesting lines here:
    1) "RFC 1918 response from Internet for 150.1.168.192.in-addr.arpa". I see this a lot in the syslog, and basically every time I do an nslookup for any of the computers in the cluster I get a new similar line in the syslog. I understand from Google that this has to do with reverse-lookup problems, but I do not know how that could affect the system. Say one of these lines appears every time a user station connects to the server, which may happen several times a second - could that possibly cause a hangup of the entire server?
    2) POKED TIMER. I have googled this quite a lot but have not found an explanation that I can relate to. What does this mean?
    3) The timestamps. It seems like the entire server stopped responding for several minutes. Normally there are many printouts to the syslog per minute on this server. Furthermore, the cron job is set to run once every minute, which, according to the log, hasn't happened here.
    OS: Ubuntu 8.04. Kernel: Linux 2.6.24-24-server x86_64 GNU/Linux. Hardware: Dell R710, RAID 1, CPU: 2x Xeon E5530, 16 GB memory. Average load is very low, and memory should not be a problem. Please let me know if you need any additional information. Best wishes, Ebbe
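
    For the RFC 1918 noise specifically, the standard advice I've come across is to answer those reverse zones locally instead of forwarding them to the Internet - a sketch for named.conf, with the zone name matching the 192.168.x.x addresses in the log above:

      zone "168.192.in-addr.arpa" {
          type master;
          file "/etc/bind/db.empty";   // empty zone file shipped with Ubuntu's bind9 package
      };

    Whether this relates to the hangups themselves, I don't know.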

  • Netgear VPN endpoint drops connectivity to single IP address

    - by Justin Bowers
    I'm having a strange issue with one of the networks I manage. We have about 14 different networks connected together through a Netgear hardware VPN. Everything has been running fine (other than standard connectivity problems) for a few years now, but I've hit a wall with a problem that's just cropped up at one of the VPN endpoint locations.
    Our primary VPN network is on the 192.168.1.0/24 subnet and our other 13 networks are on the 192.168.2.0/24 - 192.168.14.0/24 subnets. We run a terminal server on the 192.168.1.0/24 network with IP address 192.168.1.100. Starting Thursday of last week, we had a problem with connectivity from the 192.168.2.0/24 network to 192.168.1.100. When troubleshooting the problem, I found that Network 2 (192.168.2.0/24) still had connectivity to the Internet as well as VPN connectivity to Network 1 (192.168.1.0/24). We could ping and connect to any other device - just not the server with IP address 192.168.1.100. Also, none of our other networks had an issue accessing 192.168.1.100. I ran a scan on Network 2 after assigning static IP addresses to one of the workstations but received no response from 192.168.1.100 (looking for possibly a new device that someone had plugged into Network 2 with a duplicate IP address of the server). Asking the staff, no one had reported connecting a new device to Network 2 either. I then assigned a secondary IP address of 192.168.1.88 to the server and could ping and connect to that secondary IP address from Network 2, but still couldn't access it via 192.168.1.100. I then just rebooted the Netgear VPN firewall (FVS318v3), and after it came back up, connectivity to 192.168.1.100 was restored. Beforehand, when checking for devices with a possible duplicate IP address, I did run a check for available wireless access points and stations and found none (our wireless is secured via MAC-address access control through a WG102 device). I thought that it may have been a fluke for some reason, since everything came back up after a power cycle of the VPN firewall.
    Things ran fine for a few days until this afternoon, when the problem happened again. One of our users claimed that they had connectivity problems to the server, and after connecting to the computer, I found that I couldn't ping the server address anymore. I could still ping the alternate IP address of the server though, so I went ahead and rebooted the VPN firewall again and connectivity was restored. Unfortunately, I can't find anything in the security or VPN logs of the firewall that helps point me in the right direction, so I thought I would go ahead and ask to see if anyone else has any insight into why we've started having this problem. I am aware that it could still be a device with a duplicate IP address of the server on Network 2, but every employee claims that no such new device has been brought into the network. I know this is a long read, but any help is appreciated! Thanks, Justin

  • SSH authentication with an NFS-shared home directory

    - by user40135
    Hi all, I would like to ssh from machine "ub0" to another machine "ub1" without using passwords. I set everything up using NFS on "ub0", but I am still asked to enter a password. Here is my scenario:

      - machines ub0 and ub1 have the same user "mpiu", with the same password, the same user id, and the same group id
      - the two servers share a folder that is the HOME directory for "mpiu"
      - I did a chmod 700 on the .ssh directory
      - I created a key using ssh-keygen -t dsa
      - I did "cat id_dsa.pub >> authorized_keys"; on this last file I tried also chmod 600 and chmod 640
      - of course I can guarantee that on machine ub1 the user "shared_user" can see the same folder that was mounted, with no problem

    Below is the content of my .ssh folder:

      authorized_keys  id_dsa  id_dsa.pub  known_hosts

    After all of this, calling any function such as "ssh ub1 hostname" I am asked for my password. Do you know what I can try? I also UNcommented this line in the ssh_config file on both machines:

      IdentityFile ~/.ssh/id_dsa

    I also tried:

      ssh -i $HOME/.ssh/id_dsa mpiu@ub1

    Below is the ssh -vv output:

      OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to ub1 [192.168.2.9] port 22.
      debug1: Connection established.
      debug2: key_type_from_name: unknown key type '-----BEGIN'
      debug2: key_type_from_name: unknown key type '-----END'
      debug1: identity file /mirror/mpiu/.ssh/id_dsa type 2
      debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
      debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
      debug1: Remote protocol version 2.0, remote software version lshd-2.0.4 lsh - a GNU ssh
      debug1: no match: lshd-2.0.4 lsh - a GNU ssh
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-3ubuntu1
      debug2: fd 3 setting O_NONBLOCK
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
      debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
      debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr
      debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr
      debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
      debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
      debug2: kex_parse_kexinit: none,[email protected],zlib
      debug2: kex_parse_kexinit: none,[email protected],zlib
      debug2: kex_parse_kexinit:
      debug2: kex_parse_kexinit:
      debug2: kex_parse_kexinit: first_kex_follows 0
      debug2: kex_parse_kexinit: reserved 0
      debug2: kex_parse_kexinit: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
      debug2: kex_parse_kexinit: ssh-rsa,spki-sign-rsa
      debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour
      debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour
      debug2: kex_parse_kexinit: hmac-sha1,hmac-md5
      debug2: kex_parse_kexinit: hmac-sha1,hmac-md5
      debug2: kex_parse_kexinit: none,zlib
      debug2: kex_parse_kexinit: none,zlib
      debug2: kex_parse_kexinit:
      debug2: kex_parse_kexinit:
      debug2: kex_parse_kexinit: first_kex_follows 0
      debug2: kex_parse_kexinit: reserved 0
      debug2: mac_setup: found hmac-md5
      debug1: kex: server->client 3des-cbc hmac-md5 none
      debug2: mac_setup: found hmac-md5
      debug1: kex: client->server 3des-cbc hmac-md5 none
      debug2: dh_gen_key: priv key bits set: 183/384
      debug2: bits set: 1028/2048
      debug1: sending SSH2_MSG_KEXDH_INIT
      debug1: expecting SSH2_MSG_KEXDH_REPLY
      debug1: Host 'ub1' is known and matches the RSA host key.
      debug1: Found key in /mirror/mpiu/.ssh/known_hosts:1
      debug2: bits set: 1039/2048
      debug1: ssh_rsa_verify: signature correct
      debug2: kex_derive_keys
      debug2: set_newkeys: mode 1
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug2: set_newkeys: mode 0
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug2: service_accept: ssh-userauth
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug2: key: /mirror/mpiu/.ssh/id_dsa (0xb874b098)
      debug1: Authentications that can continue: password,publickey
      debug1: Next authentication method: publickey
      debug1: Offering public key: /mirror/mpiu/.ssh/id_dsa
      debug2: we sent a publickey packet, wait for reply
      debug1: Authentications that can continue: password,publickey
      debug2: we did not send a packet, disable method
      debug1: Next authentication method: password
      mpiu@ub1's password:

    It hangs here!
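
    For reference, the setup boiled down to the following; the last line is a constraint I've seen mentioned for NFS-mounted homes - sshd's StrictModes rejects keys when the home directory is writable by group or others - which may or may not apply here:

      ssh-keygen -t dsa
      cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys
      chmod 755 $HOME    # StrictModes: $HOME must not be group/world-writable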

  • OpenERP error loading the auth_openid module

    - by spy86
    I installed the OpenERP server on CentOS 6.4. When I try to start the server with the OpenERP module auth_openid, I get this error:

      [openerp@ bin]$ ./openerp-server --load=web,auth_openid
      2013-10-22 13:02:18,705 22381 INFO ? openerp: OpenERP version 7.0
      2013-10-22 13:02:18,705 22381 INFO ? openerp: addons paths: /opt/openerp/openerp-sr-preprod/current/server/openerp/addons
      2013-10-22 13:02:18,705 22381 INFO ? openerp: database hostname: localhost
      2013-10-22 13:02:18,705 22381 INFO ? openerp: database port: 5432
      2013-10-22 13:02:18,705 22381 INFO ? openerp: database user: openerp
      2013-10-22 13:02:18,706 22381 WARNING ? openerp.modules.module: module web: module not found
      2013-10-22 13:02:18,707 22381 CRITICAL ? openerp.modules.module: Couldn't load module web
      2013-10-22 13:02:18,707 22381 CRITICAL ? openerp.modules.module: No module named web
      2013-10-22 13:02:18,707 22381 ERROR ? openerp.service: Failed to load server-wide module web. The web module is provided by the addons found in the openerp-web project. Maybe you forgot to add those addons in your addons_path configuration.
      Traceback (most recent call last):
        File "/opt/openerp/openerp-sr-preprod/current/server/openerp/service/__init__.py", line 60, in load_server_wide_modules
          openerp.modules.module.load_openerp_module(m)
        File "/opt/openerp/openerp-sr-preprod/current/server/openerp/modules/module.py", line 405, in load_openerp_module
          __import__('openerp.addons.' + module_name)
        File "/opt/openerp/openerp-sr-preprod/current/server/openerp/modules/module.py", line 132, in load_module
          f, path, descr = imp.find_module(module_part, ad_paths)
      ImportError: No module named web
      2013-10-22 13:02:18,707 22381 WARNING ? openerp.modules.module: module auth_openid: module not found
      2013-10-22 13:02:18,708 22381 CRITICAL ? openerp.modules.module: Couldn't load module auth_openid
      2013-10-22 13:02:18,708 22381 CRITICAL ? openerp.modules.module: No module named auth_openid
      2013-10-22 13:02:18,708 22381 ERROR ? openerp.service: Failed to load server-wide module auth_openid.
      Traceback (most recent call last):
        File "/opt/openerp/openerp-sr-preprod/current/server/openerp/service/__init__.py", line 60, in load_server_wide_modules
          openerp.modules.module.load_openerp_module(m)
        File "/opt/openerp/openerp-sr-preprod/current/server/openerp/modules/module.py", line 405, in load_openerp_module
          __import__('openerp.addons.' + module_name)
        File "/opt/openerp/openerp-sr-preprod/current/server/openerp/modules/module.py", line 132, in load_module
          f, path, descr = imp.find_module(module_part, ad_paths)
      ImportError: No module named auth_openid
      2013-10-22 13:02:18,713 22381 INFO ? openerp: OpenERP server is running, waiting for connections...
      Exception in thread Thread-1:
      Traceback (most recent call last):
        File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
          self.run()
        File "/usr/lib64/python2.6/threading.py", line 484, in run
          self.__target(*self.__args, **self.__kwargs)
        File "/opt/openerp/openerp-sr-preprod/current/server/openerp/service/wsgi_server.py", line 436, in serve
          httpd = werkzeug.serving.make_server(interface, port, application, threaded=True)
        File "/usr/lib/python2.6/site-packages/Werkzeug-0.7-py2.6.egg/werkzeug/serving.py", line 399, in make_server
          passthrough_errors, ssl_context)
        File "/usr/lib/python2.6/site-packages/Werkzeug-0.7-py2.6.egg/werkzeug/serving.py", line 331, in __init__
          HTTPServer.__init__(self, (host, int(port)), handler)
        File "/usr/lib64/python2.6/SocketServer.py", line 402, in __init__
          self.server_bind()
        File "/usr/lib64/python2.6/BaseHTTPServer.py", line 108, in server_bind
          SocketServer.TCPServer.server_bind(self)
        File "/usr/lib64/python2.6/SocketServer.py", line 413, in server_bind
          self.socket.bind(self.server_address)
        File "<string>", line 1, in bind
      error: [Errno 98] Address already in use

    Does anybody have any advice on what's wrong? Regards
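
    The first error's own hint - "Maybe you forgot to add those addons in your addons_path configuration" - suggests something along these lines; the openerp-web path below is my guess at the layout:

      ./openerp-server \
          --addons-path=/opt/openerp/openerp-sr-preprod/current/server/openerp/addons,/opt/openerp/openerp-sr-preprod/current/web/addons \
          --load=web,auth_openid

    Separately, the closing "Address already in use" error suggests another server instance may already be bound to the port.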

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step 4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?
    Step 1 - Create the certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored in the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. By letting RD Web Access generate its own certificate, I found that the following properties are required (or at least included by the self-signed certificate):

      Enhanced Key Usage: Server Authentication, Client Authentication
      Key Usage: Digital Signature, Key Agreement
      Subject Alternative Name: DNS Name=domain.com

    Detour about self-signed certificate generation: as a quick aside, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:

      PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
      PS C:\> New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    Typing this into the shell results in an error message claiming that a function, Get-Server, cannot be found. Before using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.
    Step 2 - Observe the out-of-box behavior. The first time you visit the Deployment Properties dialog box - by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping - the window is misleading, because the Level field is listed as "Not Configured". If I understand correctly, all three of the role services are in fact using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website; the certificate being used also appears in the Certificates MMC.
    Step 3 - Assign the new certificate. The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC, in the "Personal" certificate store. The private key needs to be exportable, and you must provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done, the GUI indicates that it is ready to accept the new configuration. Once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.
    Step 4 - The GUI fails to maintain its state. If the dialog is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used; I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, the window updates to an "Untrusted" level, and that setting is then maintained through the opening and closing of the Deployment Properties dialog box.
    Is there anything I can do to make my settings stick in the GUI? I feel like something is wrong when the GUI claims I haven't fully configured certificates.
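
    A possible workaround for both applying and verifying the certificate outside the GUI, assuming the RemoteDesktop module's certificate cmdlets behave as documented - I have not confirmed this end to end:

      PS C:\> Import-Module RemoteDesktop
      PS C:\> $password = ConvertTo-SecureString -String "password" -AsPlainText -Force
      PS C:\> Set-RDCertificate -Role RDWebAccess -ImportPath "C:\temp.pfx" -Password $password -ConnectionBroker rdcb.contoso.com
      PS C:\> Get-RDCertificate -ConnectionBroker rdcb.contoso.com    # should list each role's Level and subject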

  • MacBook Pro 10.6 losing DNS service; network connection still functional if you know the IP address

    - by Vincent
    On a MacBook Pro connected to a wireless network (not sure about wired), I lose DNS. I still have a functioning connection, and as long as I know the IP address of the website, server, etc., things work: for example Skype works, and so does ssh name@ipaddress. Things can be working properly and then just quit. Once I was IMing via Skype when DNS was lost, and Skype continued to work. This has happened in multiple locations, on private and public networks. What does not work / does not fix it:

      - resetting the router
      - changing the DNS server on the computer or router
      - connecting to another network
      - removing the AirPort interface and adding it back
      - flushing the DNS cache

    The only solution seems to be a restart. A fix for this would be great, but any ideas of things to try would be welcome; even a sure way to reproduce it would be useful. There is a possibly related question, but this part of it is most definitely not true for me: "if I refresh enough -- 3 to 4 times --, it will usually pull up the site."
    Here are some tests from the terminal. Basically this confirms DNS is not functioning:

      vmd17:~ vmd$ ping google.com
      ping: cannot resolve google.com: Unknown host

    A traceroute to Google's DNS server works:

      vmd17:~ vmd$ /usr/sbin/traceroute -n -w 2 -q 2 -m 30 8.8.8.8
      traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 52 byte packets
       1  192.168.1.1  5.195 ms  2.519 ms
       2  67.172.136.1  31.881 ms  9.177 ms
       3  68.85.107.121  12.168 ms  10.003 ms
       4  68.86.103.41  12.021 ms  9.594 ms
       5  68.86.91.1  16.712 ms  12.837 ms
       6  68.86.86.210  29.951 ms  25.826 ms
       7  68.86.87.218  29.554 ms  42.894 ms
       8  75.149.231.70  68.271 ms  68.362 ms
       9  72.14.233.77  141.178 ms  72.14.233.85  82.553 ms
      10  72.14.238.243  83.381 ms  82.811 ms
      11  72.14.232.213  194.387 ms  72.14.232.215  84.837 ms
      12  209.85.253.145  100.294 ms  *
      13  8.8.8.8  101.689 ms  89.694 ms

    208.67.222.222 is the IP address of an OpenDNS server:

      vmd17:~ vmd$ dig @208.67.222.222 8.8.8.8
      ; <<>> DiG 9.6.0-APPLE-P2 <<>> @208.67.222.222 8.8.8.8
      ; (1 server found)
      ;; global options: +cmd
      ;; connection timed out; no servers could be reached

      vmd17:~ vmd$ dig @208.67.222.222 gogle.com
      vmd17:~ vmd$ dig @208.67.222.222 google.com
      ; <<>> DiG 9.6.0-APPLE-P2 <<>> @208.67.222.222 google.com
      ; (1 server found)
      ;; global options: +cmd
      ;; connection timed out; no servers could be reached

      vmd17:~ vmd$ dig @8.8.8.8 google.com
      ; <<>> DiG 9.6.0-APPLE-P2 <<>> @8.8.8.8 google.com
      ; (1 server found)
      ;; global options: +cmd
      ;; connection timed out; no servers could be reached
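
    Two stock Mac OS X commands that may help capture the state the next time it happens (the second is what I mean above by "flushing the DNS cache" on 10.6):

      scutil --dns               # show the resolver configuration the system is actually using
      dscacheutil -flushcache    # flush the directory-services/DNS cache

    Note that dig going directly to 208.67.222.222 and 8.8.8.8 times out while traceroute to 8.8.8.8 succeeds, which makes it look as though UDP port 53 specifically stops getting through.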

  • All my sites are 403 but the server is running. Errors on startup

    - by Craig
    We gave access to a contractor to install a firewall, and somehow while he was doing it he fracked something up. Everything went offline about 24 hours ago, and we are effectively out of business until I solve this; the person who messed the thing up is not returning calls. I found a few errors. First, I'm not a server guy - I can look at log files, and normally everything runs fine. All 'services' are running according to 1and1 server monitoring, and mail is being delivered just fine. The whole thing was offline until I (probably stupidly) updated the kernel from 6.2 to 6.3 this morning, and I got everything back except HTTP access. All the domains (~200 of them) are returning a 403 error, and nothing is recorded in the access log. On every restart I see this error in the messages log file:

      init: Failed to spawn ttyS0 main process: unable to execute: No such file or directory

    and a little later these:

      kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
      kernel: Hardware name: X9SCL/X9SCM
      kernel: Modules linked in: xt_iprange iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 ext4 jbd2 serio_raw i2c_i801 i2c_core sg iTCO_wdt iTCO_vendor_support e1000e ext3 jbd mbcache raid1 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
      kernel: Pid: 367, comm: md3_raid1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1
      kernel: Call Trace:
      kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0
      kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20
      kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d
      kernel: [<ffffffff8126a4d9>] ? cpumask_next_and+0x29/0x50
      kernel: [<ffffffff813e9c05>] ? md_super_wait+0x55/0x90
      kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40
      kernel: [<ffffffff813ebf46>] ? md_update_sb+0x206/0x3f0
      kernel: [<ffffffff813ee922>] ? md_check_recovery+0x3f2/0x6d0
      kernel: [<ffffffffa005b129>] ? raid1d+0x49/0x1050 [raid1]
      kernel: [<ffffffff814ed985>] ? schedule_timeout+0x215/0x2e0
      kernel: [<ffffffff814ef447>] ? _spin_unlock_irqrestore+0x17/0x20
      kernel: [<ffffffff813eb336>] ? md_thread+0x116/0x150
      kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40
      kernel: [<ffffffff813eb220>] ? md_thread+0x0/0x150
      kernel: [<ffffffff810906a6>] ? kthread+0x96/0xa0
      kernel: [<ffffffff8100c14a>] ? child_rip+0xa/0x20
      kernel: [<ffffffff81090610>] ? kthread+0x0/0xa0
      kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20

    And something is wrong with named/BIND, resulting in the same error for all domains:

      zone DOMAINEXAMPLE.com/IN: loading from master file DOMAINEXAMPLE.com failed: file not found
      zone DOMAINEXAMPLE.com/IN: not loaded due to errors.
      _default/DOMAINEXAMPLE.com/IN: file not found

    I'm pretty sure this is not enough information to solve the problem, but I'm willing to engage someone who can work this out for me. Any help would be greatly appreciated.
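
    For the BIND side, the zone files appear to be missing or mis-pathed; standard BIND utilities can enumerate what named expects versus what exists (paths below are common defaults and may differ on this box):

      named-checkconf -z                     # try to load every configured zone, reporting per-zone errors
      named-checkzone DOMAINEXAMPLE.com /var/named/DOMAINEXAMPLE.com   # validate one zone file, if it exists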

  • Just a few questions about Hyper-V virtual machines and clustering

    - by René Kåbis
    I have been using Microsoft's Hyper-V technology for a little while now, but I am just now dipping my toe into clustering. In particular, I am trying to implement a fault-tolerant SQL DB. This involves setting up two VMs, clustering them via Failover Cluster, and then installing SQL Server in some fashion. I have two physical machines - one high-end and rather beefy "heavy lifter" to contain the majority of the VMs, and another "backup" (a repurposed desktop) to hold the essential "secondary" (or failover) AD-DC, SQL and FS VMs. The main reason why I find the failover cluster at the VM level so attractive is that it presents a single IP and DNS entry to the network as a whole - if one machine (physical or virtual) goes down, you might lose some pings and the connections get reset, but the network applications (Microsoft RMS connection to the backend SQL) can still connect to a viable DB without having to mess around with the settings at all.
    My first question is in terms of SQL Server itself. If I have a cluster between two VMs, does it make more sense to install SQL Server in a Failover Cluster configuration, or should I simply install it in a stand-alone config and mirror the DBs? For example, this post suggests just mirroring the DBs, but do I just mirror standalone DBs on standalone VMs, or can I get the network and failover benefits of clustered VMs while still utilizing (on each clustered VM) standalone DBs that have been mirrored between each other? As well, I have come across a lot of documentation about SQL clustering, but most of it assumes two physical machines to hold not only the actual SQL VMs but also the quorum and witness stores. I will not be able to muster more than two physical machines. As such, I will have to be satisfied with a VM cluster that does not exceed two VMs (one for each physical machine).
    Another issue involves MSDTC - the Distributed Transaction Coordinator. When attempting to install the SQL Failover Cluster (I never completed it, for this reason), it threw a hissy fit because MSDTC had not been clustered. Search as I might, I have not yet found a way to do so under Windows Server 2012 R2. I have found plenty of docs for Windows 2008 and 2008 R2, but those instructions don't align with 2012 R2 (at least, not in a way that allows me to successfully cluster MSDTC). Plus, some of the instructions that I have found for SQL Server Failover Cluster installation suggest that a third "network device" - shared network storage (a SAN) - is required for the DB itself (and other functionality). I do not have this, and won't be getting this. Most of my storage exists on the "heavy lifter" that was designed for all of the "primary" VMs. If that physical machine goes down, so does the storage. The secondary server does have enough resources for an AD-DC server, an SQL server and a file server, so it will handle the "secondary" failover versions of those VMs (clustered or not).
    My final question involves file servers. If I cluster file servers between two VMs (one on my "heavy lifter" and another on my "backup"), how do I mirror the data between them? Clustering VMs only provides a single point of access on the network for a resource; it doesn't actually replicate data between the two - that is left to the services that serve up that data. I am unsure how I can ensure that file-server data between two clustered file-server VMs is properly mirrored. Remember, I only have two devices to be used here - my primary machine and a backup secondary.
There is no chance of me obtaining a SAN or any other type of network attached storage. What exists on the machines must act as the storage. Thanks in advance for any suggestions.
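
    [Editor’s note] If mirroring standalone instances turns out to be the better fit, a minimal T-SQL sketch of that setup is below. It assumes two standalone instances (one per VM), a database in the FULL recovery model that has already been backed up and restored WITH NORECOVERY on the mirror, and hypothetical host names SQLVM1 and SQLVM2; port 5022 is just a conventional choice for the mirroring endpoint.

        -- Hypothetical names throughout (AppDB, SQLVM1, SQLVM2, mydomain.local).
        -- Step 1: on EACH instance, create a mirroring endpoint (once per instance).
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- Step 2: on the mirror (SQLVM2), point the restored copy at the principal.
        ALTER DATABASE AppDB
            SET PARTNER = 'TCP://SQLVM1.mydomain.local:5022';

        -- Step 3: on the principal (SQLVM1), point it at the mirror.
        ALTER DATABASE AppDB
            SET PARTNER = 'TCP://SQLVM2.mydomain.local:5022';

    One caveat relevant to a two-machine setup: without a third instance acting as a witness, mirroring runs without automatic failover, so a failure would require a manual failover on the surviving partner.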

    Read the article

  • Supermicro IPMI on MBD-X8DAH+-F-O motherboard. Keyboard and mouse do not work after booting Windows Server 2008 R2

    - by LDelgado
Hello Everyone, I built a server with the motherboard mentioned above and installed Windows Server 2008 R2 Enterprise on it. IPMI is integrated on the motherboard with its own dedicated NIC, and I have that NIC configured with its own IP address. I can remote into the server using IPMI, and I can remotely control the server settings before the OS boots (BIOS, RAID configuration, etc.). When the OS boots, however, I lose the mouse and keyboard. I cannot use the keyboard or mouse when installing the OS either. So the keyboard and mouse only work when no OS is loaded - that is my problem.

    I have been doing some research and trying a few things, but I have not been successful in fixing this issue. I may be wrong, but based on what I have found online, the problem could be caused by the way the OS handles USB. The server is headless; there is no keyboard, mouse, or monitor plugged into it. When I boot the OS and remote into it, I cannot see a mouse or keyboard listed in Device Manager. Based on what I have read, the OS should detect a mouse and a keyboard when connecting remotely via IPMI.

    The following are the solutions I have tried; nothing has worked so far:

    - Updated the IPMI firmware to the latest version (1.33).
    - Made sure the mouse mode was set to Absolute (the Windows OS setting).
    - Loaded the factory defaults several times.
    - Enabled Port 64h/60h Emulation under the USB settings in the BIOS.
    - Disabled USB legacy support in the BIOS.
    - Made sure the firewall wasn't blocking IPMI (disabled the firewall).

    I have found threads in some forums from people having the same issue as me, but they were not running the same OS - they were running Linux or FreeBSD. Most of them fixed the problem by selecting the right mouse mode (Linux, in their case). One other user solved it by disabling USB Mass Storage mode, stating: "When I set it to disable USB Mass Storage when no image is loaded, the ukbd came alive, and I'm typing this on the IPMI Console." Source: http://freebsd.1045724.n5.nabble.com/IPMI-Console-No-luck-once-OS-is-booted-td3967868.html

    I suspect the solution described in that paragraph is somehow related to my problem. I have found several threads on the internet describing the same symptoms, but none of them involve Windows Server 2008 R2. Again, I may be wrong, but it seems like that could be the issue; I just don't know how to apply an equivalent fix in Windows Server 2008 R2. In any case, I could use your expertise - maybe I am missing something, or maybe I'm on the right track. Your help is much appreciated. Thank you in advance,

    Read the article

  • Apache sends plain-text response when accessing SSL-enabled site without HTTPS

    - by animuson
I've never encountered anything like this before. I was attempting to simply redirect the page to the HTTPS version if the server determined that HTTPS was off, but instead of actually redirecting it displays an HTML page - and, even odder, serves it as text/plain!

    The VirtualHost declaration (sort of):

        ServerAdmin [email protected]
        DocumentRoot "/path/to/files"
        ServerName example.com
        SSLEngine On
        SSLCertificateFile /etc/ssh/certify/example.com.crt
        SSLCertificateKeyFile /etc/ssh/certify/example.com.key
        SSLCertificateChainFile /etc/ssh/certify/sub.class1.server.ca.pem
        <Directory "/path/to/files/">
            AllowOverride All
            Options +FollowSymLinks
            DirectoryIndex index.php
            Order allow,deny
            Allow from all
        </Directory>
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule .* https://example.com:6161 [R=301]

    The page output:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://example.com:6161">here</a>.</p>
        <hr>
        <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address>
        </body></html>

    I've tried moving the Rewrite directives above the SSL directives hoping it would change something, and nothing happens. If I view the page via HTTPS, it displays fine, as it should. Apache is obviously detecting that I'm trying to rewrite the path, but it's not acting on it, and the Apache error log does not indicate anything that might have gone wrong.

    When I remove the RewriteRules, I get the standard "you can't do this because you're not using SSL" response, which is also served as text/plain rather than being rendered as HTML:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://example.com/"><b>https://example.com/</b></a></blockquote></p>
        <p>Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.</p>
        <hr>
        <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address>
        </body></html>

    That would make sense - the vhost should only work for HTTPS-enabled connections - but I still want to redirect clients to the HTTPS connection when the server determines that HTTPS is not in use.

    Thinking I could circumvent the system, I tried adding an ErrorDocument 400 https://example.com:6161 directive to the config file instead of using RewriteRules, and that just gave me a new message - still no cheese:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>302 Found</title>
        </head><body>
        <h1>Found</h1>
        <p>The document has moved <a href="https://example.com:6161">here</a>.</p>
        <hr>
        <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address>
        </body></html>

    How can I force Apache to actually redirect, rather than displaying a "301" page that shows HTML in plain-text format?
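
    [Editor’s note] A common pattern for this situation is to keep the SSL vhost HTTPS-only and put the redirect in a separate plain-HTTP vhost, rather than rewriting inside the SSL vhost. A minimal sketch, assuming the site also has a plain-HTTP listener on port 80 (the host name is a placeholder; port 6161 is taken from the post):

        # Hypothetical sketch - not from the original post.
        # Plain-HTTP vhost whose only job is to redirect to the SSL vhost.
        <VirtualHost *:80>
            ServerName example.com
            Redirect permanent / https://example.com:6161/
        </VirtualHost>

        # SSL vhost stays HTTPS-only; no RewriteRule needed here.
        <VirtualHost *:443>
            ServerName example.com
            SSLEngine On
            # ... certificate and Directory directives as in the original ...
        </VirtualHost>

    Note this only catches requests that arrive on the plain-HTTP port. A client that speaks plain HTTP directly to the SSL port is handled by mod_ssl's error path rather than a normal request cycle, which is likely why the responses above come back in such odd shape; the ErrorDocument 400 workaround tried above is, as far as I know, about the only lever for that case.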

    Read the article

< Previous Page | 150 151 152 153 154 155 156 157 158 159 160 161  | Next Page >