Search Results

Search found 7985 results on 320 pages for 'multi byte'.

  • Python install issue on Mac OS X

    - by Michael Waterfall
    I have been using the standard Python that comes with OS X Lion (2.7.2), but I wanted to build a UCS-4 version to handle 4-byte Unicode characters better. I had already installed pip and packages like pytz, virtualenv, virtualenvwrapper, etc., and these are installed in /Library/Python/2.7/site-packages. My $PATH was /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin.

    To build a new version of Python on the machine (outside of any project-specific virtual environments; that will come later), I followed the instructions in this article and managed to build it in /usr/local/bin. The problem is that when I launched a new bash window, I got the following virtualenvwrapper error:

        Traceback (most recent call last):
          File "<string>", line 1, in <module>
        ImportError: No module named virtualenvwrapper.hook_loader
        virtualenvwrapper.sh: There was a problem running the initialization hooks.
        If Python could not import the module virtualenvwrapper.hook_loader, check
        that virtualenv has been installed for
        VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python and that PATH is set properly.

    The instructions said to move /usr/local/bin to the top of the /etc/paths file, and since then I've noticed some strange issues. I installed pip into /usr/local/bin, and since the newly installed Python's site-packages is now located in /usr/local/lib/python2.7/site-packages, I assumed that pip freeze should come back empty, as nothing is installed there yet. However, pip freeze still reports the things installed in the old (OS X) site-packages folder. Here's some info after the build:

        $ which python
        /usr/local/bin/python
        $ which pip
        /usr/local/bin/pip
        $ echo $PATH
        /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

    When I uninstall a Python package with pip, it removes it from the old site-packages folder as expected. But when I install it again, instead of installing it in /usr/local/lib/python2.7/site-packages, it installs it in /Library/Python/2.7/site-packages (verified by attempting to install it again and receiving "Requirement already satisfied (use --upgrade to upgrade): pytz in /Library/Python/2.7/site-packages"). How is pip getting that path to the old site-packages folder? Why won't it install into the correct location for the Python it's running under? I'm getting several other issues since promoting /usr/local/bin, but I think if I understand this one I'll be able to get somewhere. Can anyone see what's happening? If you need any more info I'll be happy to provide it.
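
    A quick way to see which interpreter a given pip script is actually bound to, and which site-packages that interpreter installs into, is to inspect pip's shebang line and then ask that interpreter directly. A minimal check, assuming the paths above:

        # which python does this pip script run under? (the shebang tells you)
        head -1 /usr/local/bin/pip
        # which site-packages does that interpreter install into?
        /usr/local/bin/python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"

    If the shebang still points at the system interpreter, pip will keep writing to /Library/Python/2.7/site-packages no matter what $PATH says.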

  • installing windows XP in Samsung SENS 145 plus notebook (no CD drive)

    - by user13267
    Hi, I was trying to install Windows XP on a Samsung SENS 145 Plus notebook. It does not have a CD drive, and I already managed to format it and semi-install Windows XP, so now it does not even boot up. This is what I did:

    Since it supports USB booting, I first made a bootable USB of Windows XP (Korean version; SP2 I think, maybe SP3) using Novicorp WinToFlash. It managed to boot up at first and I was able to format the C drive and get Windows install to start up. It took forever to copy all the files from the USB, and after the first reboot, before installation started, I cancelled the reboot from Windows install, went to the BIOS and changed the boot device priority from USB to internal hard drive. But now on bootup it showed me a list with two options for booting Windows XP (much like in the case of a multi-OS system), so I assumed that I had formatted drive D by mistake and installed XP there, instead of on the C drive. Anyway, I chose one of them and it continued my Windows installation. I got the blue installation screen that shows ads about Windows XP in the right frame and estimated remaining time on the left. However, after the process completed and the machine rebooted, instead of showing the Windows XP logo, it says \system32\hal.dll is missing (or corrupted, I'm not sure; I needed to install the Korean version of Windows and I could not exactly read the error message, however it is one that I have already seen in an English-version installation, and I am sure it says either missing or corrupted).

    The problem is, now it shows the same error again when I try to boot from the USB drive as well. I tried to boot a portable version of Linux I made on another USB, but the computer does not boot up from that USB, and it shows the hal.dll error when I try to boot it using the Win XP installation USB I made, as well as when I try to boot from the hard drive, where I suppose Win XP is now semi-installed. So now I can't get the computer to start up at all, except going into the BIOS. What else can I try to solve this?

    Also, would it be possible to install XP on this computer by connecting it to another one running Windows 7 Ultimate, through the Ethernet card? That is, network just the two computers together, then install Windows XP on the notebook from the desktop running Windows 7? Please help, I'm running out of ideas on this one. If the Korean version of Windows XP is the problem then I am willing to install the English version as well (but I need to make sure that is the real cause of the problem).

  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the same exact SAN setup. The idea is to have those two SANs contain the same exact data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution, should a failure occur on either end. We're running Server 2008.

    The objective is to enable users in the east coast office to work on files and have those changes instantly updated on the Colorado SAN as well. We also need file locking, so that there will be no conflicts or overwritten changes if users attempt to work on the same file. Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off?

    As I understand it, DFS-R does not provide file locking, so if we used that, we would need to go with a third-party product like PeerLock. But I don't even know if DFS-R is an option. Can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at?

    There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.
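
    As a sanity check on "truly real-time" over that distance: light in fiber travels at roughly 200,000 km/s, so a 2000-mile (about 3,200 km) path has a hard latency floor before any protocol or disk overhead is added:

        one-way delay >= 3,200 km / 200,000 km/s = 16 ms
        round trip    >= 2 x 16 ms               = 32 ms per synchronous write

    Any product claiming instant synchronous mirroring over this link is still bound by that arithmetic.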

  • Perl missing while installing nginx on centos

    - by Ahoura Ghotbi
    I am trying to install nginx on my server, but it keeps returning "./configure: error: perl 5.6.1 or higher is required" even though I have perl v5.8.8! I have already downloaded nginx and am trying to configure it using the following command:

        ./configure --with-http_stub_status_module --with-http_perl_module --with-http_flv_module --add-module=nginx_mod_h264_streaming

    Here is the output:

        [root@fst nginx-0.8.55]# ./configure --with-http_stub_status_module --with-http_perl_module --with-http_flv_module --add-module=nginx_mod_h264_streaming
        checking for OS
         + Linux 2.6.18-308.el5 x86_64
        checking for C compiler ... found
         + using GNU C compiler
         + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-52)
        checking for gcc -pipe switch ... found
        checking for gcc builtin atomic operations ... found
        checking for C99 variadic macros ... found
        checking for gcc variadic macros ... found
        checking for unistd.h ... found
        checking for inttypes.h ... found
        checking for limits.h ... found
        checking for sys/filio.h ... not found
        checking for sys/param.h ... found
        checking for sys/mount.h ... found
        checking for sys/statvfs.h ... found
        checking for crypt.h ... found
        checking for Linux specific features
        checking for epoll ... found
        checking for sendfile() ... found
        checking for sendfile64() ... found
        checking for sys/prctl.h ... found
        checking for prctl(PR_SET_DUMPABLE) ... found
        checking for sched_setaffinity() ... found
        checking for crypt_r() ... found
        checking for sys/vfs.h ... found
        checking for nobody group ... found
        checking for poll() ... found
        checking for /dev/poll ... not found
        checking for kqueue ... not found
        checking for crypt() ... not found
        checking for crypt() in libcrypt ... found
        checking for F_READAHEAD ... not found
        checking for posix_fadvise() ... found
        checking for O_DIRECT ... found
        checking for F_NOCACHE ... not found
        checking for directio() ... not found
        checking for statfs() ... found
        checking for statvfs() ... found
        checking for dlopen() ... not found
        checking for dlopen() in libdl ... found
        checking for sched_yield() ... found
        checking for SO_SETFIB ... not found
        configuring additional modules
        adding module in nginx_mod_h264_streaming
         + ngx_http_h264_streaming_module was configured
        checking for PCRE library ... found
        checking for system md library ... not found
        checking for system md5 library ... not found
        checking for OpenSSL md5 crypto library ... found
        checking for zlib library ... found
        checking for perl
         + perl version: v5.8.8 built for x86_64-linux-thread-multi
        ./configure: error: perl 5.6.1 or higher is required
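
    One thing worth ruling out (an assumption on my part, since the output above stops right after the version line): this exact error is often reported when perl itself is new enough but the ExtUtils::Embed module that nginx's perl check relies on is missing; on RHEL/CentOS systems that split it out of the core perl package, installing perl-ExtUtils-Embed fixes it. nginx's configure also accepts an explicit perl binary, so you can point it at the one you verified:

        # point configure at a known-good perl; the path is an example
        ./configure --with-perl=/usr/bin/perl \
            --with-http_stub_status_module --with-http_perl_module \
            --with-http_flv_module --add-module=nginx_mod_h264_streaming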

  • How to make Horde connect to mysql with UTF-8 character set?

    - by jkj
    How do I tell Horde 3.3.11 to use UTF-8 for its MySQL connection? The $conf['sql']['charset'] setting only tells Horde what is expected from the database. Horde uses MDB2 to connect to MySQL. Is there a way to force MDB2, or mysql's character_set_client, from php.ini?

    So far I have found two workarounds. First, force MySQL to ignore the character set requested by the client:

        [mysqld]
        skip-character-set-client-handshake=1
        default-character-set=utf8

    Second, force MySQL to run SET NAMES utf8 on every connection:

        [mysqld]
        init-connect='SET NAMES utf8'

    Both have drawbacks on a multi-user MySQL server: the first disables converting character sets altogether, and the second forces every connection to produce UTF-8.

    [EDIT] Found the problem. The 'charset' parameter was unset at the last minute before being sent to the SQL backend. This is probably due to MySQL not being able to digest "utf-8", only "utf8", so a MySQL-specific mapping is required to make it work. I worked around it by translating utf-8 to utf8; it won't work with any other databases with this patch, though.

        --- lib/Horde/Share/sql.php.orig        2011-07-04 17:09:33.349334890 +0300
        +++ lib/Horde/Share/sql.php     2011-07-04 17:11:06.238636462 +0300
        @@ -753,7 +753,13 @@
                 /* Connect to the sql server using the supplied parameters. */
                 require_once 'MDB2.php';
                 $params = $this->_params;
        -        unset($params['charset']);
        +
        +        if ($params['charset'] == 'utf-8') {
        +            $params['charset'] = 'utf8';
        +        } else {
        +            unset($params['charset']);
        +        }
        +
                 $this->_write_db = &MDB2::factory($params);
                 if (is_a($this->_write_db, 'PEAR_Error')) {
                     Horde::fatal($this->_write_db, __FILE__, __LINE__);
        @@ -792,7 +798,13 @@
                 /* Check if we need to set up the read DB connection seperately. */
                 if (!empty($this->_params['splitread'])) {
                     $params = array_merge($params, $this->_params['read']);
        -            unset($params['charset']);
        +
        +            if ($params['charset'] == 'utf-8') {
        +                $params['charset'] = 'utf8';
        +            } else {
        +                unset($params['charset']);
        +            }
        +
                     $this->_db = &MDB2::singleton($params);
                     if (is_a($this->_db, 'PEAR_Error')) {
                         Horde::fatal($this->_db, __FILE__, __LINE__);

  • What PHP, Xdebug and Eclipse configurations work on Windows 7 64 bit?

    - by thaddeusmt
    I have been mucking around for days, trying to find the right combination that lets me debug with breakpoints and variable viewing in Eclipse, without crashing Apache. PHP 5.3? PHP 5.2? Eclipse Helios? Eclipse Galileo? One or the other with certain versions of Xdebug or PHP? Or do I really need to use NetBeans or something else? Is my 64-bit OS the problem? Do I need specific 64-bit versions of PHP, Eclipse or Xdebug to work on Windows 7 64? Any special Xdebug config options and tricks needed in php.ini, like turning off xdebug.profiler_enable, or not using quotes around my zend_extension path to the Xdebug dll? A vhosts issue? Scrap the whole thing and go back to Win XP or Ubuntu?

    Here's what I've already been reading:

        http://stackoverflow.com/questions/4509245/so-eclipse-and-xdebug-walk-into-a-bar-and-then-my-apache-server-dies/4602473
        http://stackoverflow.com/questions/206788/why-does-xdebug-crash-apache-on-every-xampp-install-ive-tried
        http://bugs.xdebug.org/view.php?id=459
        https://bugs.eclipse.org/bugs/show_bug.cgi?id=312951#c8
        http://stackoverflow.com/questions/2799936/xdebug-for-php-5-2-on-windows-7-64bit

    ...and so on: SO, the Xdebug bug tracker, Eclipse Bugzilla, etc. Basically, what would be great is if folks could post their working (i.e. debugging with breakpoints and local variable viewing in Eclipse) Win7 64-bit configurations, including:

        PHP version (5.3.1, 5.2.11, etc.)
        Xdebug dll (2.1.0-5.3-vc6, etc.)
        Xdebug php.ini config (zend_extension = "C:\xampp\php\ext\php_xdebug.dll", etc.)
        Apache version (2.2.14, etc.)
        Eclipse version
        Anything else important? The "secret ingredient"?

    Thanks! I miss my debugger since I got a new laptop with Win 7! Sadly it looks like some of the drivers (switchable graphics, multi-touch pad, etc.) on my lappy don't work right with Ubuntu yet, so I feel a bit trapped on Win :( I know I will figure something out eventually, but I've been at this trial-and-error game a while and am seeking some guidance. (Originally posted on StackOverflow, but moved to SuperUser: http://stackoverflow.com/questions/4628215/what-php-xdebug-and-eclipse-configurations-work-on-windows-7-64-bit )
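
    For what it's worth, here is the general shape of a php.ini block that pairs with Eclipse's default DBGp settings -- a sketch only; the dll filename and path are examples and must match the exact Xdebug build downloaded for your PHP version and compiler (VC6 vs VC9, thread-safe vs not):

        [xdebug]
        ; filename is an example -- use the build matching your PHP
        zend_extension = "C:\xampp\php\ext\php_xdebug-2.1.0-5.3-vc6.dll"
        xdebug.remote_enable   = 1
        xdebug.remote_handler  = "dbgp"
        xdebug.remote_host     = "localhost"
        xdebug.remote_port     = 9000
        xdebug.profiler_enable = 0

    Eclipse PDT listens on port 9000 by default, so the port here and under Window > Preferences > PHP > Debug have to agree.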

  • Latency issues over internet

    - by Stevo
    I have a Media Temple server running http://www.popsapp.com which I am having latency issues with. If I run ab -n 100 -c 10 http://www.popsapp.com/ from my local machine I get very bad stats, e.g.:

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      179  3375 2185.4   2837 12525
        Processing:     0   505  693.3    229  4564
        Waiting:        0    50  115.4      0   415
        Total:        964  3880 2094.5   3159 12608

    Whereas if I run it from a Rackspace server I have, I get this:

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       75    76    3.3     75    84
        Processing:   235   339   81.4    315   579
        Waiting:      159   249   61.7    234   411
        Total:        311   415   82.0    390   663

    To me this looks like intermediate network issues, but I wouldn't have thought it could be this bad! Any ideas how I can improve it? Here's the traceroute:

        traceroute to www.popsapp.com (216.70.105.183), 64 hops max, 52 byte packets
         1  192.168.2.1 (192.168.2.1)  3.738 ms  0.953 ms  1.418 ms
         2  host-92-22-112-1.as13285.net (92.22.112.1)  27.409 ms  97.093 ms  78.858 ms
         3  host-78-151-225-141.static.as13285.net (78.151.225.141)  61.830 ms  170.484 ms  113.288 ms
         4  host-78-151-225-80.static.as13285.net (78.151.225.80)  101.513 ms  host-78-151-225-22.static.as13285.net (78.151.225.22)  64.718 ms  47.309 ms
         5  xe-11-1-0-rt001.sov.as13285.net (62.24.240.14)  98.381 ms  114.424 ms  xe-11-1-0-rt001.the.as13285.net (62.24.240.6)  96.592 ms
         6  host-78-144-1-59.as13285.net (78.144.1.59)  36.799 ms  host-78-144-1-63.as13285.net (78.144.1.63)  178.426 ms  host-78-144-1-61.as13285.net (78.144.1.61)  85.516 ms
         7  xe-10-0-0-scr010.thn.as13285.net (78.144.0.224)  88.158 ms  host-78-144-0-207.as13285.net (78.144.0.207)  35.132 ms  host-78-144-0-153.as13285.net (78.144.0.153)  121.464 ms
         8  limelight-pp-thn.as13285.net (78.144.3.6)  46.987 ms  limelight-pp-sov.as13285.net (78.144.5.18)  108.025 ms  40.169 ms
         9  tge11-1.fr4.lga.llnw.net (69.28.172.149)  109.603 ms  ve6.fr4.lon.llnw.net (68.142.88.221)  121.681 ms  38.609 ms
        10  tge11-1.fr4.lga.llnw.net (69.28.172.149)  111.981 ms  113.744 ms  111.711 ms
        11  tge8-2.fr4.iad.llnw.net (69.28.189.34)  117.102 ms  ve5.fr4.iad.llnw.net (69.28.171.214)  184.372 ms  146.178 ms
        12  cr02-1-1.iad1.net2ez.com (65.97.48.254)  182.880 ms  net2ez.tge2-2.fr4.iad.llnw.net (69.28.156.170)  150.489 ms  121.862 ms
        13  65.97.50.26 (65.97.50.26)  184.620 ms  cr02-1-1.iad1.net2ez.com (65.97.48.254)  156.136 ms  131.963 ms
        14  65.97.50.26 (65.97.50.26)  124.899 ms  126.537 ms  123.322 ms
        15  e1.4.as02.iad01.mtsvc.net (70.32.64.246)  134.647 ms  186.307 ms  211.059 ms
        16  popsapp.com (216.70.105.183)  118.876 ms  113.189 ms  vzx258.mediatemple.net (216.70.104.17)  131.012 ms

    It looks to me like there is significant delay across the Limelight network. This would explain why the traceroute via my Rackspace server doesn't suffer from the same delay, as they will be using their own trunk.
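
    A follow-up tool suggestion (not from the trace above): mtr sends repeated probes and reports per-hop loss and latency, which makes it easier to tell a genuinely slow hop from a router that merely deprioritises its own ICMP replies:

        # 100 probe cycles, non-interactive per-hop report
        mtr --report --report-cycles 100 www.popsapp.com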

  • Scoping a home dev server

    - by AbhikRK
    Hi. I'm looking to build a multi-purpose home development server. In this post, I'm looking to outline what I want from such a system, the 'why's of it to some limited extent, and finally some rudiments of how I'm looking to go about it. I'm mostly a developer, with just about some sysadmin familiarity. So please excuse, correct me, and advise on any ignorance which comes across in the following ;-)

    It will serve the following goals to start with:

        1. NAS (looking at using ZFS)
        2. Source control repo, e.g. a Git server
        3. Database, e.g. a MySQL server
        4. Continuous integration, e.g. a Hudson server
        5. Other stuff as and when it comes up, e.g. RabbitMQ etc.
        6. A development sandbox to play around with new stuff

    I want to achieve as high an uptime for 2-5 as possible. They should run as independent services and with minimal maintenance (e.g. TurnKey Linux appliances). I'm thinking of running them as individual Xen DomUs; a sketch of what such a guest definition could look like follows below. Then maybe the NAS can be Dom0 and 6 can be another DomU. The user for this would be mostly me. I can see 2-4 sometimes being used by 2-3 users, but that would be infrequent. I'm looking for a repeatable setup: ideally I'd like to automate this setup through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN/on-the-go.

    My queries are:

        1. Is putting 1-6, all of them, on a single box a good idea? If so, what kind of hardware should I be looking at for a low-cost, low-power setup?
        2. Although not at present, in future I might be looking at adding audio/media servers to the mix. Would that impact the answers to 1?
        3. I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use? I had a look at the SheevaPlug, and was wondering if I could split off the NAS on its own using that, but ruled it out preliminarily due to its reported heating issues. Is it something I should still consider?

    Thanks in advance
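
    For the DomU idea mentioned above, a minimal paravirtualised guest definition is only a handful of lines; this is a sketch with example names and paths, not a tested config for this particular layout:

        # /etc/xen/git.cfg -- one small appliance-style guest (names/paths are examples)
        name       = "git"
        memory     = 512
        vcpus      = 1
        disk       = ['phy:/dev/vg0/git,xvda,w']
        vif        = ['bridge=br0']
        bootloader = "pygrub"

    One such file per service keeps the guests independent, which fits the minimal-maintenance goal, and the files themselves are trivially managed from Chef or Puppet.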

  • Hibernating and booting into another OS: will my filesystems be corrupted?

    - by Ryan Thompson
    Suppose I have Windows and Linux installed on the same computer. If I hibernate Windows, can I boot into Linux without corrupting the Windows filesystem when I resume Windows? What about the other way around? What if I hibernate one, boot into the other, and mount the hibernated filesystem read/write? Read-only? If this is unsafe, is there any way to detect the hibernated state of the other OS and prevent mounting its filesystem? Basically, how far can I push this before it breaks, and how dangerous is it near the edge?

    I think I know the answers to some of the above questions, but for other ones, I have no idea, and for obvious reasons I have not tested this on my own computer. If someone has tested these, please enlighten the rest of us. I'm not necessarily looking for a specific answer to every question; I'll accept any response that answers a reasonable portion.

    EDIT: Let me clarify that when I say "hibernate," I mean the process of writing the contents of RAM to the hard disk and completely powering down the computer. In this state, powering the computer back on brings you through the BIOS and bootloader again, and you could theoretically select another operating system on a multi-boot system. Anyway, on with the original question:

    RESULTS

    Ok, after everyone's assurances that this would work, I tested it for myself. I set up Ubuntu to remount all NTFS filesystems and external drives read-only before hibernating. There was no need for a similar Windows setup, because Windows does not read Linux filesystems. Then I tried alternately hibernating one operating system and resuming the other, back and forth a few times. I even tried mounting the Windows filesystem from Ubuntu read-write, and creating a few files. Windows didn't complain when I resumed. So, in conclusion, you can more or less freely hibernate in a dual-boot Windows/Linux scenario. Note that I did not test a dual Linux/Linux co-hibernation situation. If you have two or more Linux installs and you hibernate one of them, you might be able to corrupt the filesystem by mounting it from another.
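
    For anyone wanting to reproduce the Ubuntu side of the setup described above, a pre-hibernate hook is one way to do the remounting automatically; this is a sketch (pm-utils runs executables in /etc/pm/sleep.d, but the device and mount point here are examples, not from the original post):

        #!/bin/sh
        # /etc/pm/sleep.d/10_ntfs_ro -- remount the Windows partition read-only
        # before hibernating, and read-write again on thaw
        case "$1" in
            hibernate)
                mount -o remount,ro /dev/sda2 /media/windows
                ;;
            thaw)
                mount -o remount,rw /dev/sda2 /media/windows
                ;;
        esac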

  • udp expected behaviour not responding to test result

    - by ernst
    I have a local network topology structured as follows: three hosts and a switch in the middle. I am using a switch that supports 10/100/1000 Mbit/s full/half duplex connections. I have configured the hosts with static IPs 172.16.0.1-2-3/25. This is the output of ifconfig:

        eth0  Link encap: Ethernet  HWaddr *****
              inet addr:172.16.0.3  Bcast:172.16.0.127  Mask:255.255.255.128
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:16

    The output on H1 and H2 matches perfectly. The hosts are mutually reachable, since I have tested the network with ping. I have forced the Ethernet interface to work at 10M with:

        ethtool -s eth0 speed 10 duplex full autoneg on

    This is the output of ethtool eth0:

        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 10Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x000000ff (255)
                               drv probe link timer ifdown ifup rx_err tx_err
        Link detected: yes

    I am doing an experimental test using nttcp to calculate the goodput in the case where H1 and H2 send data to H3 at the same time. Since the three links have the same forced capability, and the amount of arriving data is 10M from H1 + 10M from H2 = 20M at H3, a bottleneck effect would be expected and, due to the unreliable nature of UDP, packet loss. But this doesn't happen, since the output of the nttcp application shows the same number of bytes sent and received. This is the output of nttcp on H3:

        nttcp -T -r -u 172.16.0.2 & nttcp -T -r -u 172.16.0.1
        [1] 4071
             Bytes  Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls  Real-C/s  CPU-C/s
        l  8388608   13.74   0.05       4.8848   1398.0140   2049    149.14  42684.8
             Bytes  Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls  Real-C/s  CPU-C/s
        l  8388608   14.02   0.05       4.7872   1398.0140   2049    146.17  42684.8
        1  8388608   13.56   0.06       4.9500   1118.4065   2051    151.28  34181.1
        1  8388608   13.89   0.06       4.8310   1198.3084   2051    147.65  36623.0

    How is this possible? Am I missing something? Any help will be gratefully appreciated. Best regards
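
    One note on methodology, not from the original post: the nttcp figures above show each sender achieving roughly 4.8-4.9 Mbit/s of goodput, so the combined offered load (about 9.7 Mbit/s) may simply never exceed the 10M bottleneck. A UDP tool that reports loss explicitly, run at a controlled rate, would confirm this; for example with iperf:

        # on H3 (receiver) -- reports loss and jitter per stream
        iperf -s -u
        # on H1 and H2 simultaneously, each offering 10 Mbit/s of UDP for 30 s
        iperf -c 172.16.0.3 -u -b 10M -t 30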

  • USB Flash not recognised by Windows and BIOS, but works fine in Linux

    - by bbalegere
    I have a Transcend JetFlash 2GB USB drive. It was working fine and I had been using it occasionally. All of a sudden it stopped working in all versions of Windows. The USB drive is also not recognised by the BIOS: it does not show in the list of bootable devices (it used to show up in the list earlier). However, the USB drive works fine in my Linux Mint 11 OS. Running dmesg gives this:

        [  941.812192] usb 1-2: new high speed USB device using ehci_hcd and address 4
        [  941.936178] usb 1-2: device descriptor read/64, error -71
        [  942.164188] usb 1-2: device descriptor read/64, error -71
        [  942.380189] usb 1-2: new high speed USB device using ehci_hcd and address 5
        [  942.504138] usb 1-2: device descriptor read/64, error -71
        [  942.732179] usb 1-2: device descriptor read/64, error -71
        [  942.948154] usb 1-2: new high speed USB device using ehci_hcd and address 6
        [  943.364134] usb 1-2: device not accepting address 6, error -71
        [  943.476172] usb 1-2: new high speed USB device using ehci_hcd and address 7
        [  943.892140] usb 1-2: device not accepting address 7, error -71
        [  943.892191] hub 1-0:1.0: unable to enumerate USB device on port 2
        [  944.296190] usb 2-2: new full speed USB device using uhci_hcd and address 3
        [  944.438251] usb 2-2: not running at top speed; connect to a high speed hub
        [  944.709928] usbcore: registered new interface driver uas
        [  944.729999] Initializing USB Mass Storage driver...
        [  944.730509] scsi6 : usb-storage 2-2:1.0
        [  944.730908] usbcore: registered new interface driver usb-storage
        [  944.730917] USB Mass Storage support registered.
        [  945.736320] scsi 6:0:0:0: Direct-Access     JetFlash Transcend 2GB    8.07 PQ: 0 ANSI: 2
        [  945.744547] sd 6:0:0:0: Attached scsi generic sg1 type 0
        [  945.753316] sd 6:0:0:0: [sdb] 3944448 512-byte logical blocks: (2.01 GB/1.88 GiB)
        [  945.758274] sd 6:0:0:0: [sdb] Write Protect is off
        [  945.758288] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
        [  945.765167] sd 6:0:0:0: [sdb] No Caching mode page present
        [  945.765181] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [  945.784309] sd 6:0:0:0: [sdb] No Caching mode page present
        [  945.784323] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [  946.239512] sdb: sdb1
        [  946.257279] sd 6:0:0:0: [sdb] No Caching mode page present
        [  946.257292] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [  946.257302] sd 6:0:0:0: [sdb] Attached SCSI removable disk

    Looks like there is something wrong with the USB drive: it is not recognised on any computer running Windows. Is there any way to fix this? Any idea why this problem occurred?

  • Varnish 503 Guru Meditation errors with pfsense and healthy apache

    - by Fammy
    We are running a pfSense firewall / load balancer with Varnish as a service, in front of Fedora Linux webservers running Apache. We are getting intermittent 503 Guru Meditation errors, and we are a bit stuck scratching our heads because it is not easily repeatable. The timeouts are set to 30s (connect and first byte), yet the 503 page will show instantly, not after 30s. Then if you refresh immediately it may very well work instantly, and sometimes for a hundred refreshes. The load average on the web servers is < 1 and on the DB server < 3 (all servers - web, DB, pfSense/Varnish - are physical rather than VMs).

    I would have thought that if the timeouts were being hit, the 503 page would only appear after 30s; am I mistaken? Also, when an error happens there does not appear to be any corresponding error in Apache's log files. This seems to affect pages as well as images, so it is possible for the page to load fine, and for 9 of the 10 images on the page to be fine but 1 not to work.

    An example of the Varnish debug log is below. It says "no backend connection", but I can't figure out why; if the load were high on Apache I could understand it being flaky. The machines are on the same gigabit Ethernet LAN.

        21 ReqStart     c *IP-REMOVED* 33418 1274368062
        21 RxRequest    c GET
        21 RxURL        c /fashion/
        21 RxProtocol   c HTTP/1.1
        21 RxHeader     c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5
        21 RxHeader     c Host: *ourdomain.com*
        21 RxHeader     c Accept: */*
        21 RxHeader     c Accept-Encoding: deflate, gzip
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error restart
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error restart
        21 VCL_call     c recv lookup
        21 VCL_call     c hash
        21 Hash         c /fashion/
        21 Hash         c *ourdomain.com*
        21 VCL_return   c hash
        21 VCL_call     c miss fetch
        21 FetchError   c no backend connection
        21 VCL_call     c error deliver
        21 VCL_call     c deliver deliver
        21 TxProtocol   c HTTP/1.1
        21 TxStatus     c 503
        21 TxResponse   c Service Unavailable
        21 TxHeader     c Server: Varnish
        21 TxHeader     c Content-Type: text/html; charset=utf-8
        21 TxHeader     c Content-Length: 384
        21 TxHeader     c Accept-Ranges: bytes
        21 TxHeader     c Date: Wed, 11 Apr 2012 10:36:17 GMT
        21 TxHeader     c X-Varnish: 1274368062
        21 TxHeader     c Age: 0
        21 TxHeader     c Via: 1.1 varnish
        21 TxHeader     c Connection: close
        21 TxHeader     c X-Cache: MISS
        21 Length       c 384
        21 ReqEnd       c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
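
    An instant "no backend connection" (rather than a 30s timeout) is also what you see when Varnish has marked the backend sick via a health probe and doesn't attempt a connection at all. A backend definition with an explicit probe looks roughly like this in VCL -- the host and thresholds here are illustrative, not the poster's config:

        backend web1 {
            .host = "192.168.1.10";        # example address
            .port = "80";
            .connect_timeout = 30s;
            .first_byte_timeout = 30s;
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 2s;
                .window = 5;               # look at the last 5 probes
                .threshold = 3;            # 3 must succeed or the backend is sick
            }
        }

    Comparing probe state over time (varnishadm debug.health in Varnish 2.x) against the moments the 503s appear would confirm or rule this out.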

  • Why is the server performance so poor? What can be done to improve the speed of the server?

    - by fslsyed
    Very slow processing using Windows Server 2008 R2 Standard with Service Pack 1.

    Situation: read a text file, using the text data to populate a series of MS SQL tables. The converted data is used to generate monthly PDF invoice files; the PDF files are saved directly to the hard drive. The application is multi-threaded, with one thread used for the text conversion and three threads for PDF invoice generation. The text conversion occurs concurrently with the invoice generation.

    Application software:

        C# using Microsoft Visual Studio 2010 Ultimate
        Crystal Reports 2011 with runtime 13_0_3, 64-bit version
        Targeted platform x64; also tested as x86 and Any CPU with similar results
        Microsoft .NET Framework 4.0
        Microsoft SQL 2008

    Issue: the software is running very slowly. The conversion of the text file is approximately six hundred fifty records per second, and generation of the PDF files is approximately twelve invoices per minute. The text file to be converted is six hundred meg, with seven thousand invoices to be generated. The software was installed on three different machines from the same distribution files, the same text file was converted on each machine, and the user executing the application was an administrator on each machine. The only variances were the machine and operating system. The configurations are as follows:

        Server:
            Operating System: Windows Server 2008 R2 Standard 64-bit (6.1, Build 7601) SP1
            System Manufacturer: IBM
            System Model: System x3550 M3 [7944AC1]
            BIOS: Default System BIOS
            Processor: Intel Xeon CPU E5620 @ 2.4GHz (16 CPUs)
            Memory: 16384MB

        Notebook:
            Operating System: Windows 7 Home Premium 64-bit (6.1, Build 7601)
            System Manufacturer: Hewlett-Packard
            System Model: HP Pavilion dv7 Notebook PC
            BIOS: Default System BIOS
            Processor: AMD Phenom II N640 Dual-Core Processor 2.9GHz (2 CPUs)
            Memory: 6144MB

        Desktop:
            Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) SP1
            System Manufacturer: Dell Inc.
            System Model: OptiPlex 960
            BIOS: Phoenix ROM BIOS PLUS Version 1.10 A11
            Processor: Intel Core 2 Quad CPU Q9650 @ 3.00GHz (4 CPUs)
            Memory: 16384MB

    Processing results per machine (the application was executed seven times on each; averages are shown):

        Machine      Text records converted/min   Invoices generated/min
        Server (1)                          650                       12
        Notebook                            980                       17
        Desktop                           2,100                       45

        (1) The server is dedicated to execution of this application; no additional applications are being executed.

    Question: why is the server performance so poor? What can be done to improve the speed of the server?

  • how does openvpn decide which interface to get IP addrs from

    - by bkrupa
    Using Ubuntu 10.04 on both ends. We have a client and server machine on the SAME network attempting to make a VPN connection. We use the config files from here and made minimal changes. The server and client start and seem to connect without any trouble. The server log looks like:

        Wed Feb 23 22:13:22 2011 MULTI: multi_create_instance called
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Re-using SSL/TLS context
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 LZO compression initialized
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel MTU parms [ L:1574 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ]
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Local Options hash (VER=V4): 'f7df56b8'
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Expected Remote Options hash (VER=V4): 'd79ca330'
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 TLS: Initial packet from 192.168.1.55:47166, sid=69112e42 5458135b
        *...*
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Feb 23 22:13:22 2011 192.168.1.55:47166 [client1] Peer Connection Initiated with 192.168.1.55:47166

    On the client side the connection looks like:

        Wed Feb 23 22:20:07 2011 [server] Peer Connection Initiated with [AF_INET]192.168.1.41:1194
        Wed Feb 23 22:20:10 2011 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
        Wed Feb 23 22:20:10 2011 PUSH: Received control message: 'PUSH_REPLY,route-gateway 10.8.0.4,ping 10,ping-restart 120,ifconfig 10.8.0.50 255.255.255.0'
        ...
        Wed Feb 23 22:20:10 2011 /sbin/ifconfig tap0 10.8.0.50 netmask 255.255.255.0 mtu 1500 broadcast 10.8.0.255
        Wed Feb 23 22:20:10 2011 Initialization Sequence Completed

    The OpenVPN server has been configured to assign IP addresses in the range 10.8.0.*, and the client has been given 10.8.0.50. When I run the following nmap from the client:

        Starting Nmap 5.00 ( http://nmap.org ) at 2011-02-23 22:04 EST
        Host 10.8.0.50 is up (0.00047s latency).
        Nmap done: 256 IP addresses (1 host up) scanned in 30.34 seconds

        Host 192.168.1.1 is up (0.0025s latency).
        Host 192.168.1.18 is up (0.074s latency).
        Host 192.168.1.41 is up (0.0024s latency).
        Host 192.168.1.55 is up (0.00018s latency).
        Nmap done: 256 IP addresses (4 hosts up) scanned in 6.33 seconds

    If I run an nmap from the server on 10.8.0.*, I get nothing. If the client has two interfaces (wireless and the tap device), when you look for a certain IP address, how does it decide which interface to connect on?
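
    To the closing question: the kernel's routing table makes that decision per destination prefix, and it can be queried directly. These are generic diagnostics rather than output from the poster's machines:

        # which interface and source address would be used for a VPN peer?
        ip route get 10.8.0.1
        # and for a LAN host?
        ip route get 192.168.1.41
        # the whole table, including the 10.8.0.0/24 route the tap0 ifconfig added
        ip route show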

  • Apache Alias subfolder and starting with dot

    - by MauricioOtta
    I have a multi-purpose server running Arch Linux that currently serves multiple virtual hosts from:

        /var/www/domains/EXAMPLE.COM/html
        /var/www/domains/EXAMPLE2.COM/html

    I deploy those websites (mostly using the Kohana framework) with a Jenkins job that checks out the project, removes the .git folder, ssh-copies the tar.gz to /var/www/domains/ on the server, and untars it. Since I don't want to have to reinstall phpMyAdmin after each deploy, I decided to use an alias. I would like the alias to be something like /.tools/phpMyAdmin/ so I could have more "tools" later if I wanted to. I have tried just changing the default httpd-phpmyadmin.conf that was installed by following the official wiki (https://wiki.archlinux.org/index.php/Phpmyadmin):

        Alias /.tools/phpMyAdmin/ "/usr/share/webapps/phpMyAdmin"
        <Directory "/usr/share/webapps/phpMyAdmin">
            AllowOverride All
            Options FollowSymlinks
            Order allow,deny
            Allow from all
            php_admin_value open_basedir "/var/www/:/tmp/:/usr/share/webapps/:/etc/webapps:/usr/share/pear/"
        </Directory>

    Changing only that doesn't seem to work with my current setup on the server, and Apache forwards the request to the framework, which 404s (as there's no route to handle /.tools/phpMyAdmin). I have mass virtual hosting enabled and set up like this:

        # Use name-based virtual hosting.
        NameVirtualHost *:8000
        # get the server name from the Host: header
        UseCanonicalName On
        # splittable logs
        LogFormat "%{Host}i %h %l %u %t \"%r\" %s %b" vcommon
        CustomLog logs/access_log vcommon

        <Directory /var/www/domains>
            # ExecCGI is needed here because we can't force
            # CGI execution in the way that ScriptAlias does
            Options FollowSymLinks ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        RewriteEngine On
        # a ServerName derived from a Host: header may be any case at all
        RewriteMap lowercase int:tolower

        ## deal with normal documents first:
        # allow Alias /icons/ to work - repeat for other aliases
        RewriteCond %{REQUEST_URI} !^/icons/
        # allow CGIs to work
        RewriteCond %{REQUEST_URI} !^/cgi-bin/
        # do the magic
        RewriteCond %{SERVER_NAME} ^(www\.|)(.*)
        RewriteRule ^/(.*)$ /var/www/domains/${lowercase:%2}/html/$1

        ## and now deal with CGIs - we have to force a MIME type
        RewriteCond %{REQUEST_URI} ^/cgi-bin/
        RewriteRule ^/(.*)$ /var/www/domains/${lowercase:%{SERVER_NAME}}/cgi-bin/$1 [T=application/x-httpd-cgi]

    There is also nginx running on this server on port 80, as a reverse proxy for Apache:

        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8000;
        }

    Everything else was set up by following the official wiki, so I don't think those parts would cause trouble. Do I need to set up the alias for phpMyAdmin alongside the mass virtual hosting, or can it be in a separate include file for that alias to work?
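
    Worth noting from the config above (my reading, untested): the mass-vhost RewriteRule rewrites every URI into /var/www/domains/... before the Alias gets a chance to match, which is exactly why the framework ends up seeing /.tools/phpMyAdmin and 404s. The config's own comment says "allow Alias /icons/ to work - repeat for other aliases", so the natural fix is another exclusion:

        # allow the /.tools/ aliases (e.g. phpMyAdmin) to work
        RewriteCond %{REQUEST_URI} !^/\.tools/

    placed with the other RewriteCond lines before the "do the magic" rule. Separately, the nginx front end shown only proxies URIs matching \.php$, so phpMyAdmin's static assets (CSS, JS, images) would be served by nginx itself rather than Apache; that location block may need a companion for /.tools/ as well.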

  • ASA and cisco vs NSA sonic firewall

    - by Lbaker101
    Currently I'm trying to structure our network to fully support and be redundant with BGP/multi-homing. Our current company size is 40 employees, but the major part of that is our development department. We are a software company, and continued connection to the internet is a requirement: 90% of work stops when the net goes down. The only thing hosted on site (that needs to remain up) is our Exchange server.

    Right now I'm faced with two different directions and was wondering if I could get your opinions. We will have two ISPs, both 20 meg up/down dedicated fiber (so 40 megs combined), handed off as an Ethernet cable into our server room: ISP #1 First Digital, ISP #2 CenturyLink. We currently have 2x ASA 5505s, but the second one is not in use; it was there to be a failover, and it just needs the Security Plus license to be matched with the primary device. But this depends on the network structure. I have been looking into the hardware that would be required to be fully redundant, and I found that we will need either of the following.

    Option one: 2x Cisco 2921+ series routers with failover licenses. They will go in front of the ASAs and either connect in a failover state, or one ISP into each of the 2921 series routers and then one line into each of the ASAs (thus all four hardware components will be used actively). So:

        2x Cisco 2921+ series routers
        2x Cisco ASA 5505 firewalls

    The other route: 2x SonicWall NSA 2400MX series, one primary and the secondary in a failover state. This removes the ASAs from the network and is about 2k cheaper than the Cisco route. It also brings down the points of failure, because it's just the two SonicWalls, and it will allow us to scale all the way up to 200-400 users (depending on their configuration).

    So the real question is: with the added functionality etc. of the SonicWall, is there a point in paying so much more to stay the Cisco route? Thanks!

  • 10.6.4 Apple Wiki: newly created users can do nothing?

    - by beefon
    Hello. After updating to 10.6.4 there's an issue: any new users that I create in Server Prefs/WGM can't post to their blogs, comment, or create wiki pages. They can't do anything! Here's the log of wiki errors (when user DURAK tries to create a new blog entry):

        [HTTPChannel,5,127.0.0.1] Traceback (most recent call last):
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/caldavd/lib/python/twisted/web/server.py", line 126, in process
        [HTTPChannel,5,127.0.0.1]     self.render(resrc)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/caldavd/lib/python/twisted/web/server.py", line 133, in render
        [HTTPChannel,5,127.0.0.1]     body = resrc.render(self)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_xmlrpc_server/WebAppServer.py", line 90, in render
        [HTTPChannel,5,127.0.0.1]     d = defer.maybeDeferred(function, *args)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/caldavd/lib/python/twisted/internet/defer.py", line 104, in maybeDeferred
        [HTTPChannel,5,127.0.0.1]     result = f(*args, **kw)
        [HTTPChannel,5,127.0.0.1] --- <exception caught here> ---
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_xmlrpc_server/ContentServiceBase.py", line 121, in xmlrpc_addEntry
        [HTTPChannel,5,127.0.0.1]     aPage = ContentEntry.newBundleBasedContentEntry (path = path, content = content, author = author, title = title, uid = uid, type = kind, versioned = self.versioned, templateName = template)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_wlt/ContentEntry.py", line 794, in newBundleBasedContentEntry
        [HTTPChannel,5,127.0.0.1]     aPage.save('First created', 'created')
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_wlt/ContentEntry.py", line 445, in save
        [HTTPChannel,5,127.0.0.1]     revisions.addRevision(self.serializeEntry(revisionAttributes), inComment = comment, inAuthor = updateAuthor, inChangeType = editType)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_utilities/sqlitersion.py", line 36, in _func
        [HTTPChannel,5,127.0.0.1]     result = f(self, *args, **kwargs)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_utilities/sqlitersion.py", line 49, in addRevision
        [HTTPChannel,5,127.0.0.1]     contentPlistStr = plistlib.writePlistToString(inContentDict).decode("utf-8")
        [HTTPChannel,5,127.0.0.1]   File "/S-m/Lib-ry/Fr-ks/Python.fr-k/Ver-s/2.6/lib/pyth-2.6/plistlib.py", line 110, in writePlistToString
        [HTTPChannel,5,127.0.0.1]   File "/S-m/Lib-ry/Fr-ks/Python.fr-k/Ver-s/2.6/lib/pyth-2.6/plistlib.py", line 94, in writePlist
        [HTTPChannel,5,127.0.0.1]   File "/S-m/Lib-ry/Fr-ks/Python.fr-k/Ver-s/2.6/lib/pyth-2.6/plistlib.py", line 251, in writeValue
        [HTTPChannel,5,127.0.0.1]   File "/S-m/Lib-ry/Fr-ks/Python.fr-k/Ver-s/2.6/lib/pyth-2.6/plistlib.py", line 280, in writeDict
        [HTTPChannel,5,127.0.0.1]   File "/S-m/Lib-ry/Fr-ks/Python.fr-k/Ver-s/2.6/lib/pyth-2.6/plistlib.py", line 238, in writeValue
        [HTTPChannel,5,127.0.0.1]   File "/S-m/Lib-ry/Fr-ks/Python.fr-k/Ver-s/2.6/lib/pyth-2.6/plistlib.py", line 171, in simpleElement
        [HTTPChannel,5,127.0.0.1]   File "/S-m/Lib-ry/Fr-ks/Python.fr-k/Ver-s/2.6/lib/pyth-2.6/plistlib.py", line 221, in _escapeAndEncode
        [HTTPChannel,5,127.0.0.1] exceptions.UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
        [HTTPChannel,5,127.0.0.1] 'Unparseable html in page, removing whatever was already written.'
        [HTTPChannel,5,127.0.0.1] Removing /Library/Collaboration/Users/durak/weblog/27133.page

    Any "old" user CAN create, modify, comment, etc. What can you recommend to fix this issue? Hope for your help...
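
    The final exception is the classic Python 2 failure mode when a UTF-8 byte string reaches plistlib, whose _escapeAndEncode assumes ASCII for str input (0xd0 is the lead byte of Cyrillic characters in UTF-8, consistent with the user name DURAK). A standalone reproduction of that error, entirely outside the wiki code, with an illustrative payload:

        # -*- coding: utf-8 -*-
        # Python 2.6 sketch reproducing the UnicodeDecodeError above
        import plistlib

        data = '\xd0\x94\xd0\xa3'        # UTF-8 bytes arriving as a plain str
        try:
            plistlib.writePlistToString({'title': data})
        except UnicodeDecodeError as e:
            print e                      # 'ascii' codec can't decode byte 0xd0 ...

        # decoding to unicode first sidesteps the implicit ASCII decode
        print plistlib.writePlistToString({'title': data.decode('utf-8')})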

  • How can I reverse mouse movement (X & Y axis) system-wide? (Win 7 x64)

    - by Scivitri
    Short Version

    I'm looking for a way to reverse the X and Y mouse axis movements. The computer is running Windows 7 x64 and Logitech SetPoint 6.32. I would like a system-level, permanent fix, such as a mouse driver modification or a registry tweak. Does anyone know of a solid way of implementing this, or how to find the registry values to change? I'll settle quite happily for how to enable the orientation feature in SetPoint 6.32 for mice as well as trackballs.

    Long Version

    People seem never to understand why I would want this, and I commonly hear "just use the mouse right-side up!" advice. Dyslexia is not something which can be cured by "just reading things right." While I appreciate the attempts to help, I'm hoping some background may help people understand.

    I have a user with an unusual form of dyslexia, for whom mouse movements are backward. If she wants to move her cursor left, she will move the mouse right. If she wants the cursor to move up, she'll move the mouse down. She used to hold her mouse upside-down, which makes sophisticated clicking difficult, is terrible for ergonomics, and makes multi-button mice completely useless.

    In olden times, mouse drivers included an orientation feature (typically a hot-air balloon you dragged upward to set the mouse movement orientation) which could be used to set the relationship between mouse movement and cursor movement. Several years ago, mouse drivers were "improved" and this feature has since been limited to trackballs. After losing the orientation feature she went back to upside-down mousing for a bit, until finding UberOptions, a tweak for Logitech SetPoint which would enable all features for all pointing devices. This included the orientation feature. And there was much rejoicing.

    Now her mouse has died, and current Logitech mice require a newer version of SetPoint, for which UberOptions has not been updated. We've also seen MAF-Mouse (the developer indicated the version for 64-bit Windows does not support USB mice yet) and Sakasa (while it works, commentary on the web indicates it tends to break randomly and often; it's also just a running program, so not system-wide).

    I have seen some very sophisticated registry hacks. For example, I used to use a hack which would change the codes created by the F1-F12 keys when the F-Lock key was invented and defaulted to screwing my keyboard up. I'm hoping there's a way to flip X and Y in the registry, or some other similar system-level tweak out there. Another solution could be re-enabling the orientation feature for mice as well as trackballs. It's very frustrating that input device drivers include the functionality we desperately need for an accessibility concern, but it's been disabled in the name of making the drivers more idiot-proof.

  • What part of SMF is likely broken by a hard power down?

    - by David Mackintosh
    At one of my customer sites, the local guy shut down their Solaris 10 x86 server, pulled the power inputs, moved it, and now it won't start properly. It boots and then presents a prompt which lets you log in; this appears to be the single-user milestone (or equivalent). Digging into it, I think that SMF isn't permitting the system to go multi-user.

    SMF was generating a ton of errors on autofs; after some fooling with it I got it to generate errors on inetd and nfs/client instead. This all tells me that the problem is in some SMF state file or database that needs to be fixed/deleted/recreated or something, but I don't know what the actual issue is. By "generate errors", I mean that every second I get a message on the console saying "Method or service exit timed out. Killing contract <#." This makes interacting with the computer difficult.

    Running svcs -xv shows the service as "enabled", in state "disabled", reason "Start method is running". Fooling with svcadm on the service does nothing, except confirm that the service is not in a maintenance state. Logs in /lib/svc/log/$SERVICE just tell me that this loop has been happening once per second. Logs in /etc/svc/volatile/$SERVICE confirm that at boot the service is attempted to start and immediately stopped, with no further entries. Note that system-log isn't starting because system-log depends on autofs, so I have no syslog or dmesg.

    Googling all these terms ends up telling me how to debug/fix either autofs or nfs/client or inetd or rpc/gss (which was the dependency that SMF was using as an excuse to prevent nfs/client from "starting"; it was claiming that rpc/gss was "undefined", which is incorrect since this all used to work. I re-enabled it with inetadm, but inetd still won't start properly). But I think that the problem is SMF in general, not the individual services. Doing a restore_repository to the "manifest_import" state does nothing to improve, or even detectably change, the situation. I didn't use a boot backup because the last boot(s) were not useful.

    I have told the customer that since the valuable data directories are on a separate file system (which fsck's as clean, so it is intact) we could just re-install Solaris 10 on the / partition. But that seems like an awfully Windows-like solution to inflict on this problem. So: any ideas what piece is broken and how I might fix it?
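
    For anyone triaging a similar boot-time SMF wedge, the usual command sequence looks like this. These are generic diagnostics rather than a verified fix for this box, and the service FMRIs are just the ones named above:

        # what is blocking the milestone, and why
        svcs -xv
        # inspect a specific service's state, and what it depends on
        svcs -l svc:/system/filesystem/autofs:default
        svcs -d svc:/network/nfs/client:default
        # retry a service wedged in "Start method is running"
        svcadm disable -t svc:/system/filesystem/autofs:default
        svcadm enable  svc:/system/filesystem/autofs:default
        # if a service has dropped to maintenance, clear it
        svcadm clear svc:/network/inetd:default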

  • SQL Clustering on Hyper V - is a cluster within a cluster a benefit.

    - by Chris W
    This is a re-hash of a question I asked a while back; after a consultant came in firing ideas at other teams in the department, the whole issue has been raised again, hence I'm looking for more detailed answers.

    We're intending to set up a multi-instance SQL cluster across a number of physical blades, which will run a variety of different systems across each SQL instance. In general use there will be one virtual SQL instance running on each VM host, and each VM host will run on a dedicated underlying blade. The setup should give us lots of flexibility for maintenance of any individual VM or underlying blade, with all the SQL instances able to fail over as required.

    My original plan had been to do the following:

        1. Install 2008 R2 on each blade
        2. Add Hyper-V to each blade
        3. Install a 2008 R2 VM on each blade
        4. Within the VMs: create a failover cluster and then install SQL Server clustering

    The consultant has suggested that we instead do the following:

        1. Install 2008 R2 on each blade
        2. Add Hyper-V to each blade
        3. Install a 2008 R2 VM on each blade
        4. Create a cluster on the HOST machines which will host all the VMs
        5. Within the VMs: create a failover cluster and then install SQL Server clustering

    The big difference is the addition of step 4, whereby we cluster all of the guest VMs as well. The argument is that it improves maintenance further, since we have no ties at all between the SQL cluster and the physical hardware. In theory we can live-migrate the guest VMs around the hosts without affecting the SQL cluster at all, so for routine maintenance of physical blades we can move the SQL cluster around without interruption and without needing to fail over.

    It sounds like a nice idea, but I've not come across anything on the internet where people say they've done this and it works OK. Can I actually do the live migrations of the guests without the SQL cluster hosted within them getting upset? Does anyone have any experience of this setup, good or bad? Are there some pros and cons that I've not considered?

    I appreciate that mirroring is also a valuable option to consider; in this case we're favouring clustering since it will cover the whole of each instance and we have a good number of databases. Some DBs are for lumbering third-party systems that may not even work kindly with mirroring (and my understanding of clustering is that failovers are completely transparent to the clients). Thanks.

  • how to setup kismet.conf on Ubuntu

    - by Registered User
    I installed Kismet on my Ubuntu 10.04 machine with:

        apt-get install kismet

    Everything seems to work fine, but when I launch it I see the following error:

        kismet
        Launching kismet_server: //usr/bin/kismet_server
        Suid priv-dropping disabled. This may not be secure.
        No specific sources given to be enabled, all will be enabled.
        Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng)
        Enabling channel hopping.
        Enabling channel splitting.
        NOTICE: Disabling channel hopping, no enabled sources are able to change channel.
        Source 0 (addme): Opening none source interface none...
        FATAL: Please configure at least one packet source. Kismet will not function if no
        packet sources are defined in kismet.conf or on the command line. Please read the
        README for more information about configuring Kismet. Kismet exiting.
        Done.

    I followed this guide: http://www.ubuntugeek.com/kismet-an-802-11-wireless-network-detector-sniffer-and-intrusion-detection-system.html#more-1776 - however, in kismet.conf I am not clear on the following line:

        source=none,none,addme

    as to what I should change this to. lspci -vnn shows:

        0c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g [14e4:4315] (rev 01)
            Subsystem: Dell Device [1028:000c]
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at f69fc000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [40] Power Management version 3
            Capabilities: [58] Vendor Specific Information <?>
            Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
            Capabilities: [d0] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting <?>
            Capabilities: [13c] Virtual Channel <?>
            Capabilities: [160] Device Serial Number
            Capabilities: [16c] Power Budgeting <?>
            Kernel driver in use: wl
            Kernel modules: wl, ssb

    and iwconfig shows:

        lo        no wireless extensions.
        eth0      no wireless extensions.
        eth1      IEEE 802.11bg  ESSID:"WIKUCD"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: <00:43:92:21:H5:09>
                  Bit Rate=11 Mb/s   Tx-Power:24 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management mode:All packets received
                  Link Quality=1/5  Signal level=-81 dBm  Noise level=-90 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:169  Invalid misc:0   Missed beacon:0

    So what should I be putting in place of source=none,none,addme, given the output above?
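
    For context, the line's format is source=<capture type>,<interface>,<descriptive name>, so given the iwconfig output the interface part is eth1. The sticking point is the capture type: the proprietary Broadcom wl driver shown in lspci generally cannot enter monitor mode, so the following is an assumption, and switching to the open-source b43 driver may be a prerequisite. The shape of the line would be something like:

        # kismet.conf -- source=type,interface,name
        # assumes the open-source Broadcom driver is in use instead of wl
        source=bcm43xx,eth1,addme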

  • Software Raid 10 corrupted superblock after dual disk failure, how do I recover it?

    - by Shoshomiga
    I have a software RAID 10 with 6 x 2TB hard drives (RAID 1 for /boot); Ubuntu 10.04 is the OS. I had a RAID controller failure that put 2 drives out of sync and crashed the system. Initially the OS didn't boot and dropped into initramfs instead, saying that drives were busy, but I eventually managed to bring the RAID up by stopping and assembling the drives. The OS booted up and said that there were filesystem errors; I chose to ignore that, because it would remount the fs in read-only mode if there were a problem.

    Everything seemed to be working fine, and the 2 drives started to rebuild. I was sure that it was a SATA controller failure, because I had DMA errors in my log files. The OS crashed soon after that with ext errors. Now it's not bringing up the RAID: it says that there is no superblock on /dev/sda2, even if I assemble manually with all the device names. I also did a memtest and changed the motherboard, in addition to everything else.

    EDIT: This is my partition layout:

        Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0009c34a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2048      511999      254976   83  Linux
        /dev/sdb2          512000  3904980991  1952234496   83  Linux
        /dev/sdb3      3904980992  3907028991     1024000   82  Linux swap / Solaris

    All 6 disks have the same layout. Partition #1 is for the RAID 1 /boot, partition #2 is for the RAID 10 far plan, and partition #3 is swap (but sda did not have swap enabled).

    EDIT 2: This is the output of mdadm --detail /dev/md1:

            Layout : near=1, far=2
        Chunk Size : 64k
              UUID : a0feff55:2018f8ff:e368bf24:bd0fce41
            Events : 0.3112126

        Number   Major   Minor   RaidDevice   State
           0       8       34        0        spare rebuilding   /dev/sdc2
           1       0        0        1        removed
           2       8       18        2        active sync   /dev/sdb2
           3       8       50        3        active sync   /dev/sdd2
           4       0        0        4        removed
           5       8       82        5        active sync   /dev/sdf2
           6       8       66        -        spare   /dev/sde2

    EDIT 3: I ran ddrescue, and it has copied everything from sda except a single 4096-byte sector that I suspect is the RAID superblock.

    EDIT 4: Here is some more info, too long to fit here:

        lshw: http://pastebin.com/2eKrh7nF
        mdadm --detail /dev/sd[abcdef]1 (RAID 1): http://pastebin.com/cgMQWerS
        mdadm --detail /dev/sd[abcdef]2 (RAID 10): http://pastebin.com/V5dtcGPF
        dumpe2fs of /dev/sda2 (from the ddrescue-cloned drive): http://pastebin.com/sp0GYcJG

    I tried to recreate md1 based on this info with the command:

        mdadm --create /dev/md1 -v --assume-clean --level=10 --raid-devices=6 --chunk=64K --layout=f2 /dev/sda2 missing /dev/sdc2 /dev/sdd2 missing /dev/sdf2

    But I can't mount it. I also tried to recreate it based on my initial mdadm --detail /dev/md1, but it still doesn't mount. It also warns me that /dev/sda2 is an ext2fs file system, but I guess that's because of ddrescue.
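
    A general caution worth adding here (not from the original post): every mdadm --create rewrites the member superblocks, so whatever metadata still survives on the other five members is the only remaining record of the original device order, layout and event counts. Capturing it first, and preferring forced assembly over re-creation, is the safer order of operations:

        # dump the surviving RAID superblocks: device roles, layout, event counts
        mdadm --examine /dev/sd[abcdef]2
        # try to assemble from that metadata, tolerating out-of-sync members
        mdadm --assemble --force /dev/md1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2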

    Read the article

  • How to enjoy DVD on Apple iPad

    - by user44251
    I believe many people spent a sleepless night yesterday waiting for the new Apple tablet to arrive. A few days ago, or perhaps longer, I noticed fierce debate about it: its name, size, capacity, processor, main features, price, and so on. Now they can take a long breath, with the new Apple tablet, named iPad, officially released on 28 January 2010 (Beijing time). But I know a new battle is just beginning. iPad sounds somewhat like iPod, and it really does share some similarities: it is smart, light, and portable. It has a 9.7-inch, LED-backlit IPS display with a remarkably precise Multi-Touch screen. And yet, at just 1.5 lbs and 0.5 inches thin, it's easy to carry and use everywhere. It greatly improves your experience with the web, email, photos, and videos. Right now it can run almost 140,000 of the apps on the App Store, and it can even run the apps you have downloaded for your iPhone or iPod touch.

    But so far I haven't seen any sign that it can work with DVDs; apparently there is no built-in DVD-ROM drive or DVD player that can play a DVD directly. As Apple states, the video formats the iPad supports are MPEG-4 (MP4, M4V), H.264, MOV, etc., and the audio formats accepted are AAC, Protected AAC, MP3, AIFF, WAV, etc.; those are the formats commonly used with an iMac. This could really be a hard nut to crack if you want to watch your favourite DVDs on this magic Apple iPad. But don't worry, there is still a way out; you just need a few steps for ripping and importing DVD movies to the Apple iPad with a simple application, DVD to iPad Converter.

    What's in DVD to iPad Converter for Mac: DVD to iPad Converter for Mac is a powerful and professional application designed for the newly released Apple iPad. It can rip and convert your DVD contents to Apple iPad-compatible MPEG-4 (MP4, M4V), H.264, MOV, etc., and other popular file formats like AVI, WMV, MPG, MKV, VOB, 3GP, and FLV can also be converted, so you can put them on portable devices like the iPod, iPhone, iRiver, BlackBerry, etc. Besides, it can extract audio from DVD videos and save it as MP3, AIFF, AAC, WAV, etc. The Mac version has also been enhanced to run on both PowerPC and Intel (Snow Leopard included). It offers versatile editing features that let you make your own DVD videos. For example, you can cut your DVD to whatever length you like with Trim, crop unwanted parts from DVD clips with Crop, and add special effects like Gray, Emboss, and Old Film to make your videos more artistic. Its built-in merging feature and batch mode also let you join several DVD clips into a single one and do batch conversion. And more features can be expected if you spend a few minutes trying it.

    Read the article

  • Does 64bit Windows 8 have the same 75% memory-usage limitation for applications as Windows 7?

    - by Barleyman
    64-bit Windows 7 (and Windows Vista) have a built-in limit that keeps applications from using the last 25% of RAM. You get a low-memory warning when you get close to the limit, and even if you disable that warning, applications will run out of memory and crash, because the OS refuses to allocate memory from that last 25%. That was fine when Vista was designed and machines had 1 GB of total memory, but it is pretty daft for today's 8 GB machines. Yes, the system will use that extra 2 GB for cache and the like, but running out of memory while "merely" 2 GB are still free makes no sense. NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior: the system will cram the page file to the last byte while refusing to touch that last quarter of the RAM.

    Does Windows 8 change this behavior? Is there now some fixed minimum free-RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low-memory limit?

    EDIT: Here is an older post that discusses this same behavior on Windows 7. There is a fixed 25% limit in Windows 7, and I'd like to know if it's still present in Windows 8: Windows 7 / Page File Disabled / 12 GB RAM / 2+ GB RAM free and "your computer is running low on memory"

    Edit2: Here is another link discussing the low-memory warning and how to disable it. Note that the author claims the limit on RAM usage is 80%, not 75%. That seems correct, since you can in fact allocate 6.4 GB of RAM on an 8 GB machine; anything above and beyond that goes to the page file, though. http://halflight.com.au/2011/04/06/how-to-disable-low-memory-warnings-and-the-advantages-of-removing-the-page-file/

    Edit3: Here are a couple of Process Explorer screenshots that demonstrate how it goes down. Exhibit 1: https://dl.dropbox.com/u/42068601/sysinfo.jpg Exhibit 2: https://dl.dropbox.com/u/42068601/sysint2.jpg You can see that Windows 7 treats using memory beyond 6.4 GB as the very last resort. I have the low-memory warning switched off here, so programs crashed at the allocation shown in the last screenshot. With the low-memory warning turned on, it starts nagging before you can push the OS to use that remaining 1.6 GB.

    The question is not "Is it OK that Windows does not want to allocate the last 20% of RAM because X", it's "Does Windows 8 still behave this way". With 16 GB this really becomes dumb.
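    As a hedged aside (my assumption, not something from the original question): one way to probe the ceiling empirically is the Sysinternals Testlimit tool, whose -d switch leaks and touches memory in 1 MB chunks until allocation fails:

        Testlimit64.exe -d

    Comparing the total it reaches against physical RAM should show where the OS starts refusing allocations or pushing them to the page file.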

    Read the article
