Search Results

Search found 4835 results on 194 pages for 'svn dump'.

Page 157/194 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • Windows errors, how do I find root cause and fix it? Getting several errors

    - by Eric Martin
    My server is having issues and not responding to customers' https requests. I checked the Event Viewer and found several errors. These two are listed a couple of times:

        WINS encountered a database error. This may or may not be a serious error. WINS will try to recover from it. You can check the database error events under 'Application Log' category of the Event Viewer for the Exchange Component, ESENT, source to find out more details about database errors. If you continue to see a large number of these errors consistently over time (a span of few hours), you may want to restore the WINS database from a backup. The error number is in the second DWORD of the data section.

    And this one:

        An error occurred while using SSL configuration for socket address 0.0.0.0:444. The error status code is contained within the returned data.

        SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: xxx.xxx.xxxx.xxx]

    I also found this in the Event Viewer, but the computer has been restarted since this message appeared and I have not seen it again:

        Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that it is large enough to contain all physical memory.

    These are my virtual memory settings: [screenshot]

    I'm not familiar with WINS, so I wasn't sure if that is where I should start or how to resolve it. Is the WINS error causing the other problems, or should I be looking somewhere else?
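
    For anyone digging into the same Event Viewer trail: the ESENT events that the WINS message points at can also be pulled from the command line. A sketch, assuming Windows Server 2008 or later (where wevtutil is available), not a confirmed fix:

        rem List the 20 most recent ESENT events from the Application log, newest first
        wevtutil qe Application "/q:*[System[Provider[@Name='ESENT']]]" /f:text /c:20 /rd:true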

    Read the article

  • Wrong Outlook anywhere settings

    - by Ken Guru
    Hey all, I wanted to enable NTLM authentication on Outlook Anywhere, and after running the command Set-OutlookAnywhere -IISAuthenticationMethods Basic,NTLM my settings got changed. This is a dump from before I ran the command:

        [PS] C:\Windows\system32>Get-OutlookAnywhere

        ServerName                 : EXCAS01
        SSLOffloading              : False
        ExternalHostname           :
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic}
        MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : EXCAS01
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : Rpc (Default Web Site)
        DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
        Identity                   : EXCAS01\Rpc (Default Web Site)
        Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
        ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 05.01.2011 16:59:55
        WhenCreated                : 27.11.2009 11:20:12
        OriginatingServer          :
        IsValid                    : True

    Notice the settings for "Name", "DistinguishedName", and "Identity". After I ran the command, I ended up with this:

        [PS] C:\Windows\system32>Get-OutlookAnywhere

        ServerName                 : EXCAS01
        SSLOffloading              : False
        ExternalHostname           :
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic, Ntlm}
        MetabasePath               : IIS:///W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : EXCAS01
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : EXCAS01
        DistinguishedName          : CN=EXCAS01,CN=HTTP,CN=Protocols,CN=EXCAS01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=asp,DC=ssc,DC=no
        Identity                   : EXCAS01\EXCAS01
        Guid                       : 289b4865-caf1-4412-95ee-6fb0dff55e8b
        ObjectCategory             : asp.ssc.no/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 06.01.2011 09:43:50
        WhenCreated                : 27.11.2009 11:20:12
        OriginatingServer          : ASP-DC-2.
        IsValid                    : True

    Now the "Name", "DistinguishedName" and "Identity" have changed, and when I try to change them back by running Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)", I get the following error:

        [PS] C:\Windows\system32>Set-OutlookAnywhere -Identity "EXCAS01\Rpc (Default Web Site)"
        Set-OutlookAnywhere : The operation could not be performed because object 'EXCAS01\Rpc (Default Web Site)' could not be found on domain controller 'ASP-DC-2.'.

    Remember, RPC over HTTP works fine with Basic authentication (even with the wrong settings), but NTLM still doesn't work. How do I change the settings back?
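
    One detail worth noting from the second dump: the directory object has been renamed, so any follow-up cmdlet has to address it by its new identity. A sketch (not a confirmed fix; undoing the rename itself may need attention at the directory level):

        # Reference the object by its current identity, taken from the
        # Identity field in the second dump above (EXCAS01\EXCAS01).
        Get-OutlookAnywhere -Identity "EXCAS01\EXCAS01" | Format-List Name,Identity,IISAuthenticationMethods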

    Read the article

  • BTrFS crashhhh?

    - by bumbling fool
    I create a new BTrFS RAID 10 file system using two 250 GB drives and the second partition of a third 80 GB drive. I create a subvolume and a snapshot. I mount the snapshot and start copying 8 GB of data to it. It gets to around 1 GB, then the desktop disappears and what looks like a non-interactive terminal comes up with dump/crash information. I don't have a camera handy or I'd take a picture and post it; it basically looks like stack trace info. CTRL-ALT-F7 will eventually bring back the desktop, but the entire BTrFS portion of the OS is hung and unresponsive until I reboot. I've reformatted and reproduced this problem 3 times now and I'm about to give up :( I realize it is possible this problem is not entirely BTrFS' fault, because I'm on Natty, which is still alpha.

    More granular details, in case I'm an idiot:

    1) Create the file system: sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
    2) Initial temporary mount: mkdir /btrfs && sudo mount -t btrfs /dev/sda2 /btrfs
    3) Create the subvolume: btrfs s c /btrfs/vm
    4) Create the initial snapshot (optional): btrfs s sn /btrfs/cantremember.snap.something
    5) Unmount /btrfs and mount the subvolume: sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
    6) Copy data to the subvolume.
    7) Balance data across drives (optional): btrfs f bal <path> (I never get to this step 7...)

    Am I doing something wrong?

    Read the article

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage the mounted partitions from Puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky. For example, by default the owner of the directory is root, and if the root (/) of the mounted partition has another owner, Puppet will try to change it, and I don't want this. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make Puppet not care about the permissions of the directory used as the mount point? This is what I'm using right now:

        define extra_mount_point(
            $device,
            $location = "/mnt",
            $fstype   = "xfs",
            $owner    = "root",
            $group    = "root",
            $mode     = 0755,
            $seltype  = "public_content_t",
            $options  = "ro,relatime,nosuid,nodev,noexec"
        ) {
            file { "${location}/${name}":
                ensure  => directory,
                owner   => "${owner}",
                group   => "${group}",
                mode    => $mode,
                seltype => "${seltype}",
            }
            mount { "${location}/${name}":
                atboot  => true,
                ensure  => mounted,
                device  => "${device}",
                fstype  => "${fstype}",
                options => "${options}",
                dump    => 0,
                pass    => 2,
                require => File["${location}/${name}"],
            }
        }

        extra_mount_point { "sda3":
            device  => "/dev/sda3",
            fstype  => "xfs",
            owner   => "ciupicri",
            group   => "ciupicri",
            options => "relatime,nosuid,nodev,noexec",
        }

    In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.
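
    For what it's worth, Puppet only manages the attributes a resource explicitly declares, so a minimal sketch of a mount-point directory that leaves ownership and mode alone might look like the following (untested against 0.25.x; the paths are illustrative):

        # Create the mount point but declare no owner/group/mode, so Puppet
        # leaves whatever the mounted filesystem exposes untouched.
        file { "/mnt/sda3":
            ensure => directory,
        }

        mount { "/mnt/sda3":
            ensure  => mounted,
            atboot  => true,
            device  => "/dev/sda3",
            fstype  => "xfs",
            options => "relatime,nosuid,nodev,noexec",
            dump    => 0,
            pass    => 2,
            require => File["/mnt/sda3"],
        }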

    Read the article

  • OpenWrt vs DD-WRT

    - by Ioan Paul Pirau
    I have a TP-Link WR1043ND router and I want to install one of these two firmwares: OpenWRT or DD-WRT. I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience in using both OpenWRT and DD-WRT which one they would recommend and why. To give a few reference points, I'm interested in:

    - reliability: network stability, both on cable and wireless and on the USB drive
    - performance: network speed (very important), and also USB drive speed
    - configurability: the possibility to add extensions such as a torrent client, or FTP, SSH, WWW and SVN servers directly
    - ease of use: the ease of installation and configuration of the router
    - support/docs: how much info there is if you stumble upon a problem and have to find some documentation, or whether there's any free support (but that's a long shot)

    Of course, I don't imagine that I will find the perfect firmware or that one is vastly superior to the other. Also, if there's anyone out there who uses one of these firmwares on a TP-Link WR1043ND, it would be great to get some feedback about the impact of the changes from the original firmware.

    P.S. I'm also open to Tomato if it's the better one.

    Read the article

  • Windows 7 fails to install

    - by Brian Ortiz
    I'm upgrading from Vista SP1 (which was itself upgraded from XP over a year ago) to Windows 7 RTM (64-bit Ultimate to 64-bit Ultimate). After 4 hours or so, the install fails with the message "This version of Windows could not be installed. Your previous version of Windows has been restored, and you can continue to use it." After this error I am back at my Vista desktop. There was no error that I could see during the install, just a message indicating that it was reverting everything. I tracked down the error log (from C:\$WINDOWS.~BT\Sources\Panther) and uploaded it onto Pastebin. Here is an excerpt:

        2009-08-09 02:54:57, Error    Number of Enumerated Devices = 21[gle=0x00000103]
        2009-08-09 02:54:58, Error    Failed to find driver file path. Error=00000002x
        2009-08-09 02:54:58, Error    Failed to find driver file path. Error=00000002x
        2009-08-09 02:54:58, Error    Failed to find driver file path. Error=00000002x[gle=0x80092004]
        2009-08-09 02:54:58, Error    Failed to find driver file path. Error=00000002x[gle=0x80092004]

    It was suggested that I upgrade to SP2 before upgrading to Windows 7, but this made no difference. I have since uninstalled SP2, since it was creating some problems with a piece of hardware. I know a fresh install is best, but I'm hoping to avoid that because I'd need a new hard drive. Per Reuben's instruction, I found the install's dump and uploaded it here. (266 KB)

    Read the article

  • Windows XP corrupts registry every several hours

    - by Ilya Kazakevich
    There is a Dell XPS 400 with Windows Media Center installed. It is installed on RAID (Intel Matrix Storage), which is built into the chipset south bridge. The RAID has two 150 GB WDC drives connected as a mirror. All drivers and updates are installed (SP3 and so on). A week ago the PC changed its video mode to 256 colors (like VESA mode) and after several moments I got a BSOD: c000021a: 0xc0000005. Dr. Watson did not create a dump, although it is installed as the default debugger. After a reboot, Windows said that the config file is missing or corrupted. So I booted to the recovery console and found that the registry file (config) was suspiciously small. I replaced it with one from a recovery point and Windows booted successfully. But after about 3 hrs it crashed again in the same way! I looked in the Event Viewer: it said that Explorer.exe failed to open \global??\DLIAFS. I looked in WinObj and found that it is a device. I set a "deny from everyone" ACL on this device, and after several hours my Windows crashed again. I restored the registry, booted again, and there was no error about DLIAFS. I did a full chkdsk and it did not find anything bad. But I found an event about an error paging to \Harddrive1\D. I do not have a pagefile there, but I thought I should check my disk again. Unfortunately I cannot use SMART tools for RAID, but I downloaded the latest software from Intel (it can do the same things the RAID BIOS can, but from Windows). It verified my disks, found some errors and fixed them, then I rebooted. And it crashed again. I am lost. What (except kernel debugging) could be done here? Thanks

    Read the article

  • Why do I often have to refresh pages I navigate to once for them (or content in them) to load?

    - by GetOutOfBox
    I have noticed a bizarre pattern when using my PC: when I open a link to a website, it will often take a very long time to load, or time out. Sometimes content on the website will be drawn, but again, it seems to get "stuck" for an unusual amount of time before finishing. Most affected is YouTube; almost every time I navigate to a YouTube video from another website such as Google, the video will not begin playing, but will instead just display the player controls with a black screen where the video should be, plus the buffering symbol, usually before displaying an error such as "The video failed to load". The unusual part of this problem is that whenever this happens, refreshing the page always causes it to load almost immediately the second time around, without any problems. Note that I'm not talking about how some browsers will briefly display whatever has been cached when the page is refreshed or loading is stopped, but that the second load of the website is genuinely faster. I have done my best to rule out some of the obvious causes:

    - My Windows 7 desktop computer is the only device that seems to be affected.
    - I use Firefox on it (latest version, Flash updated, etc).
    - My connection has more than enough bandwidth (30 megabits down, 4 up), and I've even tried QoSing all other devices to make sure this isn't happening due to usage spikes.
    - Wireshark is not showing any clearly unusual network activity (i.e. frequently dropped packets).

    Read the article

  • How to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure, so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and it eventually crashed with "insufficient memory". http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time they take; I am hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via Ethernet cable. The USB drive is a Seagate 1 TB, and can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during the backup to speed up the process.
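
    Since both candidate PCs run Vista or Windows 7, robocopy ships in the box and handles long paths and retries better than xcopy. A hedged sketch (the share name and drive letter here are made up and would need adjusting):

        rem Mirror the NAS share to the USB drive; /MIR makes the target an exact
        rem copy, so a restore is the same command with source and target swapped.
        robocopy \\DNS-313\Volume_1 E:\nas-backup /MIR /R:2 /W:5 /LOG:C:\nas-backup.log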

    Read the article

  • Apache2 slow serving static while healthy

    - by user45339
    My Apache status looks like:

        201 requests/sec - 98.8 kB/second - 504 B/request
        85 requests currently being processed, 345 idle workers

        _____CCW_C_____C__C__C_R____C_WC_________C__C____CW__C__CCC_____
        __C____W______C___C___CW__C_C______C__W_C__C_____CCC____C______R
        CC_C_______C___C____C______________C______C__C________________C_
        ___________________C______________________C_______C___C_____C___
        CC____C__C___R_____C_C_CC__________C___C___________R____C_C_C___
        ______C______W_W__W___C____________________C__WCC__R__R_C_______
        R__RC________________________C___R____W__C____..................
        ....................................................

    Server load averages 2 on a 4-core machine. IO utilization is 10-15% and doesn't have many jumps over 70%. The machine has almost 4 GB free and uses no swap. The site on the machine is a PHP site. All the PHP code is optimized and is mostly fast when it gets accessed; however, sometimes requests get stuck, meaning no response for at least 10 seconds. We debugged the PHP code, but it is quite optimal and fast. We spent a lot of time on it until we decided to test requesting a static test.html page containing only:

        <html><body>test</body></html>

    This static resource also gets "stuck" in the same manner the PHP pages get "stuck". How is this possible given the health of the system? I tested the network, but when the PHP shows "slowness" in the site monitoring, the HTML test files also take far longer than 10 seconds to load using:

        time lynx -dump http://127.0.0.1/test.html

    We are kind of desperate to solve this problem, but we cannot seem to tackle it.
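
    One cheap way to narrow down where the stall happens is to split the request into phases. A sketch using curl's timing variables (assuming curl is installed, sampling in a loop while the problem is occurring):

        # Print connect time vs. total time for the static page; a large gap
        # between time_connect and time_total points at the server, not the network.
        while true; do
            curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}\n' \
                http://127.0.0.1/test.html
            sleep 1
        done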

    Read the article

  • Installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI, and have added the rpmforge repo and the dag repo. Here are some guidelines I'm using for reference: TWoZaO and Razuna. The rpmforge repo has ffmpeg, but if you try to install it then it will complain that it is missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from svn so I can be sure to enable the options I want (namely libx264). It seems strange to me, though, that SDL is not in rpmforge or dag; according to both of my references above, it should be there. I tried to grab it manually from here, but it needs these dependencies, so no go:

        error: Failed dependencies:
            SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
            alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
            libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
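
    For the svn route mentioned above, a build that only needs libx264 can skip SDL entirely, since SDL is only required for ffplay. This is a sketch under that assumption, using the historical ffmpeg svn URL, and it presumes x264 and its headers are already installed:

        # Check out and build ffmpeg with libx264; --disable-ffplay avoids
        # the SDL dependency chain altogether.
        svn checkout svn://svn.ffmpeg.org/ffmpeg/trunk ffmpeg
        cd ffmpeg
        ./configure --enable-gpl --enable-libx264 --disable-ffplay
        make && sudo make install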

    Read the article

  • How to recover the plesk database?

    - by Kau-Boy
    When I try to launch the Plesk administration page of my server I get the following error:

        ERROR: PleskMainDBException
        MySQL query failed: MySQL server has gone away

    The MySQL server itself is working well, although it seems that the Plesk database is somehow corrupt and any action on this database results in a restart of the mysql process, so even queries to other databases on the same MySQL server are lost. If I try to connect to the Plesk database using phpMyAdmin, I can only see the number of tables the database had originally, but I am not able to open the table listing; as soon as I try, the mysql process crashes again. Connecting to the database using ssh works. I can even run a SELECT statement against any table and get a result. I don't know if it is a Plesk error, or an error of the psa database, or even of the MySQL server. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how can I do it, and will all my settings get lost doing it?

    EDIT: Trying to dump the database, I get the following error:

        mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES
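
    As a first, non-destructive diagnostic, the tables of the psa database can be checked from the shell. A sketch, assuming a typical Plesk install where the MySQL admin password sits in /etc/psa/.psa.shadow:

        # Check (not yet repair) all tables in the psa database
        mysqlcheck -uadmin -p`cat /etc/psa/.psa.shadow` psa

        # If corruption is reported, a repair attempt (after backing up /var/lib/mysql):
        mysqlcheck --repair -uadmin -p`cat /etc/psa/.psa.shadow` psa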

    Read the article

  • Best photo management software?

    - by Niels Basjes
    Hi, what I would like is a single piece of software (or a smart combination of tools) that allows me to manage my photos in a better way than what I've found so far.

    1. Tags: Primarily I need a way of tagging the images, so I can manually tag photos the same way we tag questions here at SO/SF/SU. I want this software to place a lot of the tags automagically (obvious things like date and resolution).

    2. Face recognition: What I would really like is a feature that can recognize faces in images and place tags with the name of the person. So far I've only heard of one online photo system that can do that (Picasa) and not yet of any offline tool.

    3. Version database: I must have some way of having a central Git/SVN/... repository that contains all images. I had a hard drive corruption a few years ago and it took me a long time to figure out which images had been damaged. I always want to be able to go back to what the camera produced.

    4. Website: I want to be able to generate a website (a few 'tag'-specific websites) based on the actual content.

    5. Easy bulk uploading: Many photo tools have a one-at-a-time uploading option. I prefer simply 'throwing' my images onto a file server under Linux (Samba) and letting the system automagically integrate, tag, recognize, etc. all images.

    Ok, I know this is a bit much. Perhaps you guys have some suggestions about existing tools that can make this possible, or even a complete system that does this.

    EDIT: To clarify on the OS: I prefer Linux for any 'server' task and Windows XP for any 'desktop' task. Thanks for all your input. Niels Basjes

    Read the article

  • Can anyone explain these differences between two similar i7 processors? [closed]

    - by Brian Frost
    I have two systems I've just built. They both have i7 processors and Asus P8Z77 motherboards. When I run a simple processor loop benchmark that I wrote in Delphi some time back, one machine shows as nearly twice as fast as the other. I then used CPU-Z to dump the details of the hardware, and the fast machine shows:

        Processor 1             ID = 0
        Number of cores         4 (max 8)
        Number of threads       8 (max 16)
        Name                    Intel Core i7 2700K
        Codename                Sandy Bridge
        Specification           Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz
        Package (platform ID)   Socket 1155 LGA (0x1)
        CPUID                   6.A.7
        Extended CPUID          6.2A
        Core Stepping           D2
        Technology              32 nm
        TDP Limit               95 Watts
        Core Speed              3610.7 MHz
        Multiplier x FSB        36.0 x 100.3 MHz
        Stock frequency         3500 MHz

    while the slow machine shows:

        Processor 1             ID = 0
        Number of cores         4 (max 8)
        Number of threads       8 (max 16)
        Name                    Intel Core i7 2600K
        Codename                Sandy Bridge
        Specification           Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
        Package (platform ID)   Socket 1155 LGA (0x1)
        CPUID                   6.A.7
        Extended CPUID          6.2A
        Core Stepping           D2
        Technology              32 nm
        TDP Limit               95 Watts
        Core Speed              1648.2 MHz
        Multiplier x FSB        16.0 x 103.0 MHz
        Stock frequency         3400 MHz

    i.e. the slow machine has a 2600K against the fast machine's 2700K. The very different "Multiplier x FSB" must be significant, but I don't understand how two processors with such 'similar' numbers can be so different. To get the machines performing the same, must I use identical processors, or is there some clever setting that I can change? Thanks for any help. Brian.

    Read the article

  • Unable to remove Read-Only attribute from folder in Windows XP

    - by elcuco
    I have this directory from which I cannot remove the read-only attribute. The computer is running XP SP2 (or SP3, not sure) and the directory sits in an NTFS file system. Looking on the web I found this: http://support.microsoft.com/kb/256614 which says that if the directory is "customized" it's treated as a system folder and thus "read only". I don't think this is the scenario in my case, but anyway it's not helping; their recommendation is more or less:

        attrib -r -s d:\data /s /d

    and this is not working for me. Any other ideas?

    More info: The directory is served to an HTTP server (WAMP) and the directory is an SVN checkout. What happens is that the web server cannot write files into the directory (imagecache from Drupal, if you are really interested).

    Edit 2: The original post claimed that the directory sits on a VFAT FS; however, I booted Fedora 11 from a live CD and the partition is marked as NTFS.

    Edit 3: I have left the company where this situation happened, so I cannot fully close this question. But things get even worse: I tested the "attrib -r" answer I posted, and it did not work for me, yet now the developer says that it worked for her. A nice WTF moment. Probably a reboot helped... Sorry for losing the details. If anyone has the same problem and one of the answers helps them, please comment.

    Read the article

  • Intel NIC X540-T1 non-functional in Ubuntu Server 12.04

    - by Jeff Carr
    I have installed three Intel X540-T1s in servers running Ubuntu Server 12.04, but all are non-functional: no link lights, no packets sent or received, and no connection on IPv4 or IPv6, whether set up as DHCP or static. Also, dmesg doesn't detect cable connection or disconnection. I updated the default ixgbe driver to Intel's latest version (3.11.33) with no change. The Ethernet controller is being reported as X540-AT2 (which might be a problem that I can't figure out how to fix), but the subsystem is X540-T1, so I believe that might be intended. Does anyone have any experience with this that could assist?

        ifconfig eth2
        eth2      Link encap:Ethernet  HWaddr a0:36:9f:14:5f:ea
                  inet addr:192.168.101.1  Bcast:192.168.101.255  Mask:255.255.255.0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        ethtool -i eth2
        driver: ixgbe
        version: 3.11.33
        firmware-version: 0x8000037c
        bus-info: 0000:08:00.0
        supports-statistics: yes
        supports-test: yes
        supports-eeprom-access: yes
        supports-register-dump: yes

        lspci -vvnns 08:00.0
        08:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2 [8086:1528] (rev 01)
                Subsystem: Intel Corporation Ethernet Converged Network Adapter X540-T1 [8086:0002]
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0, Cache Line Size: 32 bytes
                Interrupt: pin A routed to IRQ 16
                Region 0: Memory at e8000000 (64-bit, prefetchable) [size=2M]
                Region 4: Memory at e8200000 (64-bit, prefetchable) [size=16K]
                [virtual] Expansion ROM at e8280000 [disabled] [size=512K]
                Capabilities: <access denied>
                Kernel driver in use: ixgbe
                Kernel modules: ixgbe

    Read the article

  • Building a PC, advice on SSD/Hybrid Hard Drives

    - by Jamie Hartnoll
    I am looking at building a new PC; it's mainly for office (graphics-heavy) use and programming. I'm looking for good performance with opening and closing programs and files, as well as a fast boot. I plan to have 3 primary hard drives:

    1. Windows 7
    2. Programs (Photoshop etc)
    3. Current files

    (There'll also be a large-capacity backup drive, but this will be the Seagate drive I already have.) So, my question is: looking at standard "old-fashioned" hard drives and SSD drives, there's obviously a massive price difference. I have been looking at drives like this: http://www.ebuyer.com/268693-corsair-120gb-force-3-ssd-cssd-f120gb3-bk-cssd-f120gb3-bk and this: http://www.ebuyer.com/321969-momentus-xt-750gb-sata-2-5in-7200rpm-hybrid-8gb-ssd-in-st750lx003. Having no experience of using either, I don't know what's the most efficient thing to go for. Clearly the SSD will have better performance, but if, for example, I had an SSD for Windows (say about 100 GB), that would clearly give me the boot speed I want. Then I guess my real questions are:

    1. If I were to buy one more SSD, would it give the greatest improvement on standard performance if used to store programs, or currently used files?
    2. Given that the OS is on an SSD, should I not bother with the 3 drives and instead partition that hybrid drive to store programs and currently used files on it?

    Obviously, option two is cheaper and option one could cause me storage issues, but that's when I can dump files I am not currently using onto another drive. Anyway, I am open to suggestions... so what do you suggest?!

    Read the article

  • SQL 2008 SP1 crashing almost daily

    - by matijake
    Hey, almost every day our new DB crashes. It is a virtual server residing on the same hardware as 5 other servers, two of them being identical MS SQL 2008 SP1 machines and two being Oracle 11g's, so I can pretty much rule out hardware issues. The server has a dedicated local LUN, 4 vCPUs and 8 GB memory with a 2 GB Windows swap file. It runs 4 instances. The primary instance is limited to 5 GB memory with parallelism set to 4, running on MS SQL 2008 SP1 @ Windows Server 2008 Enterprise R2 x64. Only that primary instance is crashing. After it crashes nothing can connect to it; it's even impossible to shut it down through the service manager. What I found in the logs is:

        ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG\SQLDump0081.txt
        SqlDumpExceptionHandler: Process 4788 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.

    The whole log can be seen at http://kabl.org/files/SQLDump0081.txt and a second crash log, made a second later, at http://kabl.org/files/SQLDump0082.txt. I have analyzed the mini crash dump with Microsoft tools, but with no promising results. If it can help, here it is: http://kabl.org/files/SQLDump0081.mdmp

    Any ideas are greatly welcome, since it is becoming quite a pain in the ass to restart the server almost every day :) Regards, -Matija
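
    For anyone wanting to take the minidump further than the automated tools, loading it in WinDbg with public symbols usually names the faulting module. A sketch of the usual steps (assuming the Debugging Tools for Windows are installed):

        rem Open the minidump in the debugger
        windbg -z SQLDump0081.mdmp

        then, inside the debugger's command window:

        .symfix
        .reload
        !analyze -v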

    Read the article

  • How can I know if a replacement display is compatible with my laptop?

    - by moraleida
    I have an Acer Aspire 4810TZ from 2010 which I truly love: it's a 3-year-old notebook that still carries on for about 5 to 6 hours straight off the battery. I've also invested in an SSD and extra RAM for it, and it's fine for my use right now. I'd really rather not just dump it. Now... its screen is dead and I'm looking for replacement parts, but it turns out to be a kind of expensive display, an AUO B140XW02 14" LED. US$125 was the cheapest I found here (Rio de Janeiro, Brazil); the official dealer sells them for US$175. I'm aware of "How do I know if a LCD is compatible with my Laptop?" and "Can I replace a laptop screen with one of a different resolution?" but the answers don't really help me in this case. Also, "Replacing an LCD screen in a laptop" gives me a hint, but I'm completely dumb at hardware, so this is why I'm asking: Is it possible to buy an LCD or a lower-grade LED screen with the same connectors? What characteristics should I look for to find out if a certain display is compatible?

    Read the article

  • smbclient timing out

    - by Sam Lee
    I am trying to set up a Samba share on a CentOS machine. I want to connect to this server using smbclient on OS X. Here is what happens:

        > smbclient -L X.X.X.X
        timeout connecting to X.X.X.X:445
        timeout connecting to X.X.X.X:139
        Error connecting to X.X.X.X (Operation already in progress)
        Connection to X.X.X.X failed

    What could be going wrong? Here is my iptables dump on the CentOS machine (the server):

        > iptables -L -n
        Chain INPUT (policy ACCEPT)
        target     prot opt source       destination
        ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0
        REJECT     all  --  0.0.0.0/0    127.0.0.0/8    reject-with icmp-port-unreachable
        ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0      state RELATED,ESTABLISHED
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0      tcp dpt:445
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0      tcp dpt:3000
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0      tcp dpt:80
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0      tcp dpt:443
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0      state NEW tcp dpt:22
        ACCEPT     icmp --  0.0.0.0/0    0.0.0.0/0      icmp type 8
        REJECT     all  --  0.0.0.0/0    0.0.0.0/0      reject-with icmp-port-unreachable
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0      tcp dpt:3000

    And finally, my smb.conf:

        [global]
        workgroup = workgroup
        security = SHARE
        load printers = No
        default service = global
        path = /home
        available = No
        encrypt passwords = yes

        [share]
        writeable = yes
        admin users = myusername
        path = /home/myhome/
        force user = root
        valid users = myusername
        public = yes
        available = yes
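
    One detail visible in the dump: port 445 is accepted, but NetBIOS session traffic on 139 never gets past the catch-all REJECT rule. Opening it is only a sketch and may not be the whole story (even 445 timed out, which suggests filtering somewhere on the path as well):

        # Insert an allow rule for the NetBIOS session service ahead of the REJECT
        iptables -I INPUT -p tcp --dport 139 -j ACCEPT
        service iptables save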

    Read the article

  • What is the quickest reliable way to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure, so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and it eventually crashed with "insufficient memory". http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time they take; I am hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via Ethernet cable. The USB drive is a Seagate 1 TB, and can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during the backup to speed up the process.

    Read the article

  • Windows 7 x64 RTM USB Port Has Power But Won't Recognize Mouse/Keyboard/Anything

    - by ben
    I have an odd error that doesn't seem to fit in with any of the other odd Windows 7 x64 USB errors that have been kicked up on Google. Here we go:

    - I uninstalled TortoiseSVN and clicked "restart computer". My machine had been up for around 28 days.
    - On reboot my mouse and keyboard failed to work any more, and I couldn't log in. I tried every USB port I have on my Dell 390 and the ports on my Dell 19" monitors; nothing worked. They had power, but Windows would not respond when I manipulated the keyboard/mouse.
    - I rebooted my computer and pressed F2 to get into the BIOS; my keyboard works fine in the BIOS.
    - The keyboard and mouse work fine on other computers over USB.
    - I found adapters to convert the keyboard and mouse from USB to PS/2 ports, and they work fine. I'm actually typing this question on the same keyboard, same computer, just using PS/2 ports for my mouse and keyboard.

    It appears to be a Windows 7 x64 issue. Other things I have tried:

    - Multiple other mice and keyboards, and an iPhone, all with no luck. Each one gets power, but Windows never tries to install drivers or sees that they are connected.
    - Uninstalling and reinstalling all USB drivers. The drivers uninstall and reinstall fine and report no errors in Control Panel.
    - In Power Management, disallowing Windows from turning off USB ports to save power.
    - Installing the latest nVidia drivers for my graphics card; no change.

    Anyplace else I can look/try? Thanks!

    Read the article

  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn, running as PHP-FPM) on CentOS 5.8 x64. The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs.

    In the php-fpm log:

        WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start

    In the nginx log:

        ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream

    From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. For example, https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source is one of the scripts that works perfectly on my other machines but causes PHP to crash here. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it.

    +++ UPDATE +++

    Now I have been doing some debugging in one of the scripts that is playing up. If you go to line 808 (http://pastebin.com/gSnzRtXb), it runs the curl_exec() command. When that is run, it crashes. If I put echo 'test'; exit; just above that line, it echoes correctly; if I put it below that line, PHP crashes. Which means it's line 808 that causes the crash. So I made a very simple script to do some testing: http://pastebin.com/Rshnyhcm which also uses curl_exec, but that runs just fine. So I started to dig deeper into that query from the Facebook script to see what values the $opts array contains, from line 806. Output of that array is: http://pastebin.com/Cq9ffd3R What the problem is, I still have no clue :(
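
    A stripped-down probe that mimics what the SDK does at that line (an HTTPS request through curl_exec) can separate the curl/OpenSSL stack from the oAuth code. A sketch, with the Graph API URL standing in for whichever endpoint crashes:

        <?php
        // Minimal HTTPS request through curl_exec; if this also segfaults,
        // the problem is in PHP's curl/SSL stack, not in the oAuth scripts.
        $ch = curl_init('https://graph.facebook.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
        $result = curl_exec($ch);
        var_dump($result, curl_error($ch));
        curl_close($ch);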

    Read the article

  • SQL Server Backup modes, and a huge log file

    - by Matt Dawdy
    Okay, I'm not a server administrator, a network guy, or a DBA; I'm merely a programmer helping out a small company. They have an IT guy who isn't MS-centric (most stuff is on Mac), and he and I are trying to figure out a solution here. We've got 1 main database. We run nightly full backups. I know they are full backups because I can take the latest file, or any of the daily backups, go to a completely new machine, "restore" the backup to an empty database, and our app runs perfectly fine off of it. The backups have grown from 60 MB to 250 MB over 4 months. When running, the log file is 1.7 GB, and the data file is only 200-300 MB. Yes, the recovery model is set to full. So, my question, after all of that: if we are keeping daily backups, and we don't have the need (and aren't smart enough) to roll the DB back to a certain point in time, am I really losing anything by changing the recovery model to simple? And, if I do change it to simple, will it completely dump the log file, or at least reduce it way the hell down? And will that make our database run faster? I know that it'll make my life easier when I copy a relatively recent backup to my local machine to do development and testing...
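
    For reference, the switch itself is two statements. A sketch, where MyDb and MyDb_log are placeholders for the real database name and the log's logical file name (the latter can be read from sys.database_files):

        -- Switch to simple recovery: the log no longer accumulates between backups
        ALTER DATABASE MyDb SET RECOVERY SIMPLE;

        -- The existing 1.7 GB log file does not shrink on its own; reclaim it once
        USE MyDb;
        DBCC SHRINKFILE (MyDb_log, 256);  -- target size in MB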

    Read the article
