Search Results

Search found 68825 results on 2753 pages for 'problem'.


  • How to play 24 fps video smoothly on a 60Hz display? (or which player supports frame interpolation?)

    - by netvope
    I use mpc-hc to play videos on Win7 x64. With the default settings (#1), playback is great most of the time, but panning shots are not smooth. I stepped through the video frame by frame and found that the panning movement itself is smooth (e.g. each frame shifts horizontally by 10 pixels), so the problem is how the 23.976 fps video is interpolated to 60Hz. The judder looks like what a "2:3 pulldown" would cause, where the frames are played unevenly: frame 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, etc. (#2)

    Using "optimal renderer settings" (#3) instead of the default disables the Aero theme and causes tearing. Setting my LCD display to 50Hz may have improved the judder slightly (but I can't really tell). My display does not support 24Hz or 48Hz, and forcing them in the Nvidia control panel gives a blurry screen. I've tried other video players (VLC and KMPlayer), the ReClock DirectShow filter, video files from different sources (#4), turning DXVA on/off, and a computer with a different GPU, but the judder in playback is similar in every case. None of them solved the problem.

    So, how can I play 23.976 or 24 fps video smoothly on a 60Hz display? I think a video player could make the video smoother by doing linear interpolation, such as:

    1. 100% frame 1
    2. 60% frame 1 + 40% frame 2
    3. 20% frame 1 + 80% frame 2
    4. 80% frame 2 + 20% frame 3
    5. 40% frame 2 + 60% frame 3
    6. 100% frame 3
    7. 60% frame 3 + 40% frame 4
    ... etc.

    Can any existing video player do this?

    Footnotes:
    (#1) Video renderer: EVR Custom Pres.
    (#2) This example converts a 24 fps video into 30 fps.
    (#3) View > Renderer settings > Reset > Reset to optimal renderer settings
    (#4) The files I have are all H.264 mkv files, but I don't think the file format/encoding matters.
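
    For reference, smooth 60 fps output can be approximated offline with ffmpeg's motion-interpolation filter; a minimal sketch, assuming a build that includes the minterpolate filter (file names are placeholders):

        # blend-interpolate 23.976 fps up to 60 fps; mi_mode=blend is the
        # linear cross-fade described above, mi_mode=mci does full motion
        # compensation instead
        ffmpeg -i input.mkv -vf "minterpolate=fps=60:mi_mode=blend" output.mkv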


  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers.

    My proposed solution: create a single queue in CUPS with a backend script that hands the job off to the two real printer queues.

    My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:

    1. Is there already an option to do this in CUPS that I've missed?
    2. The line I use to add my queue is "lpadmin -p MultiPass -E -v multipass -P Generic PostScript Printer". But the DeviceURI is rejected as bad unless I specify a directory, as in "-v multipass:/tmp". Why is this?
    3. For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. The logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.

    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
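
    For what it's worth, a backend of the kind proposed might look like the sketch below (untested; the queue names and discovery string are assumptions, and a real backend must live in /usr/lib/cups/backend, be owned by root and not be world-writable, or CUPS will refuse to run it):

        #!/bin/sh
        # With no arguments, CUPS is asking the backend to advertise itself:
        # device-class scheme "make-model" "info"
        if [ $# -eq 0 ]; then
            echo 'direct multipass "Unknown" "MultiPass fan-out backend"'
            exit 0
        fi
        # Otherwise argv is: job-id user title copies options [file].
        # With only 5 arguments the job body arrives on stdin.
        job_file="$6"
        tmp=""
        if [ -z "$job_file" ]; then
            tmp=$(mktemp /tmp/multipass.XXXXXX) || exit 1
            cat > "$tmp"
            job_file="$tmp"
        fi
        # Hand the job to the two real queues (names are hypothetical)
        status=0
        lp -d printer_a -n "$4" -t "$3" "$job_file" || status=1
        lp -d printer_b -n "$4" -t "$3" "$job_file" || status=1
        [ -n "$tmp" ] && rm -f "$tmp"
        exit $status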


  • Driver denied access to PCI card

    - by Corin
    Alright, I asked this on StackOverflow (here) and they suggested trying ServerFault to get help on permissions. So here's the deal: we designed a custom PCI card and wrote the driver for it. It's been working for years without problems, but now we've encountered one particular installation where it doesn't work. The problem is that we cannot connect to the PCI card to begin communication with it. We tried replacing the card and had the same problem. We had the motherboard replaced, thinking the PCI slots were bad; that didn't help either. We tried the cards in a different computer and they all worked, so it seems to be something specific to this computer. The Windows Device Manager indicates the device is working properly and seems to have all the correct driver info. We now have this troublesome computer back at the office for testing. With the help of some extra debug info in the driver, we determined that we cannot connect because access is denied. Sounds like a permissions issue to me. I should note that we are logged into the system as a local administrator. So what configuration option in Windows can prevent access to a device?


  • Configuration of Server root email - Change Address and Name on outgoing email

    - by JTWOOD
    As a newbie Postfix user, I've gotten so far and now I am stuck on a small problem. I would like to configure my local network servers to send alerts and the like using the following:

    1. From address: [email protected]
    2. From name: the hostname

    I can get #1 to work fine using smtp_generic_maps. The problem is that in my email client the name is listed as "root", as in the header shows the following:

        Date: Sun, 29 Jul 2012 13:21:01 -0400 (EDT)
        From: [email protected] (root)
        To: undisclosed-recipients:;

    I'd like to change it to "From: [email protected] (Zeus)". I imagine this can be done with header_checks, but so far I haven't gotten anything to work, and before I waste a ton of time trying, I'd like to make sure I am on the right track. My aliasing and generic maps are set up correctly (as far as I can see and know; the results are correct!). I just want to change that last bit in the From field to reflect the hostname. I would also like to add something to the subject of the outgoing messages for easy filtering, something like:

        Subject: [Zeus.domain] - "Original Subject"

    Any suggestions are much appreciated. Thanks!
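
    If header_checks is the right track, something along these lines might be a starting point (untested; the hostname, domain tag and map path are placeholders, and smtp_header_checks applies only to mail leaving via the SMTP client, so locally delivered mail is untouched):

        # /etc/postfix/main.cf
        smtp_header_checks = pcre:/etc/postfix/smtp_header_checks

        # /etc/postfix/smtp_header_checks
        # rewrite the "(root)" display name to the host's name
        /^From:(.*)\(root\)$/   REPLACE From:$1(Zeus)
        # tag outgoing subjects for easy filtering
        /^Subject:\s*(.*)$/     REPLACE Subject: [Zeus.domain] - $1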


  • Partition is missing in /dev

    - by haimg
    I'm having a strange problem since I moved from CentOS 5 to CentOS 6. I have three disks; the first two are used as a RAID1, and the third is a stand-alone backup disk that is not listed in /etc/fstab (it is mounted when needed and then unmounted).

    My problem: after a boot, /dev/sdc exists but /dev/sdc1 does not. The links in /dev/disk are also absent for the first partition of sdc. The disk itself is fine, and if I hot-remove it and plug it back in, /dev/sdc1 appears and everything works.

    My question: what subsystem manages auto-discovery of disks, partitions, etc. during the boot process (e.g. what creates /dev/disk/by-label)? How do I configure it to scan /dev/sdc too and create all the relevant files and links in /dev?

    Edit: Here's the relevant part of the dmesg output (the only place sdc appears). It does list sdc1, but it's not in /dev!

        sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 3:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB)
        sd 1:0:0:0: [sdb] Write Protect is off
        sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
        sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sd 3:0:0:0: [sdc] Write Protect is off
        sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
        sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sdb: sdc:
        sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 0:0:0:0: [sda] Write Protect is off
        sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sda:
        DMAR:[DMA Read] Request device [00:1e.0] fault addr 361bc000
        DMAR:[fault reason 06] PTE Read access is not set
        sdb1 sdb2 sdb3 sdc1 sda1
        sd 1:0:0:0: [sdb] Attached SCSI disk
        sd 3:0:0:0: [sdc] Attached SCSI disk
        sda2 sda3
        sd 0:0:0:0: [sda] Attached SCSI disk
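
    (On CentOS 6 the device nodes and the /dev/disk links are udev's job.) As a quick test after a boot, something like this should ask the kernel and udev to re-create the missing node; a sketch for narrowing the problem down, not a fix for the root cause:

        # re-read sdc's partition table, then replay the "add" events
        partprobe /dev/sdc
        udevadm trigger --action=add --sysname-match='sdc*'
        # compare with the events generated by a hot-plug
        udevadm monitor --kernel --udev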


  • bind9 - forwarders are not working

    - by Sarp Kaya
    I am experiencing an issue with bind. If I want to resolve any domain name that is in the zone file, it works fine. However, I cannot resolve anything that does not belong to the zone file. I know that the actual DNS servers being forwarded to are working fine, but somehow bind9 fails to use them. The content of /etc/bind/named.conf.options is:

        options {
            directory "/var/cache/bind";
            forwarders {
                131.181.127.32;
                131.181.59.48;
            };
            dnssec-validation auto;
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    I have also tried using only one IP address, and it still did not work. The content of /etc/bind/named.conf is:

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";

    so there is no problem with including the options file. Any recommendations for fixing this problem?
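
    A few checks that might narrow this down (a sketch; the forwarder address is taken from the config above). In particular, without "forward only;" bind falls back to iterating from the root servers when the forwarders misbehave, and with DNSSEC validation enabled a forwarder that mangles DNSSEC records can produce SERVFAILs:

        # can the server reach the forwarder directly?
        dig @131.181.127.32 www.example.com
        # in the options block, force forwarding instead of iteration:
        #     forward only;
        # then reload and watch what bind actually does
        rndc reload
        rndc querylog   # toggles query logging
        tail -f /var/log/syslog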


  • Backup data from RAID 1 disk out of its server

    - by Doomsday
    I'm facing what should be a pretty easy problem, in my opinion. I've extracted a working disk from a RAID1 and I'm looking to copy only the data (the FS and RAID configuration don't matter) to another location (another FS). My problem is that I'm not able to mount this disk properly on another Linux box. I first looked at the partition table:

        # fdisk -l /dev/sdc
        Disk /dev/sdc: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End       Blocks  Id  System
        /dev/sdc1               63  1249535699   624767818+  fd  Linux raid autodetect
        /dev/sdc2       1249535700  1250017649       240975  fd  Linux raid autodetect
        /dev/sdc3       1250017650  1250258624      120487+  82  Linux swap / Solaris

    I understood I should use the md (mdadm) tools. Once installed:

        # cat /proc/mdstat
        Personalities :
        md0 : inactive sdc1[1](S)
              624767744 blocks
        unused devices: <none>

    And some other information:

        # mdadm --examine /dev/sdc1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 8f292f54:7e5aef72:7e5ab5fd:b348fd05
          Creation Time : Mon Jun  2 03:39:41 2008
             Raid Level : raid1
          Used Dev Size : 624767744 (595.82 GiB 639.76 GB)
             Array Size : 624767744 (595.82 GiB 639.76 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0
            Update Time : Tue Feb  7 22:34:59 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
               Checksum : a505b324 - correct
                 Events : 15148

              Number   Major   Minor   RaidDevice State
        this     1       8        1        1      active sync   /dev/sda1
           0     0       8       17        0      active sync   /dev/sdb1
           1     1       8        1        1      active sync   /dev/sda1

    From here I've tried to mount, but I'm not comfortable with the md tools and how they work:

        # mount /dev/sdc1 /mnt/sdc1
        mount: unknown filesystem type 'linux_raid_member'
        # mount /dev/md0 /mnt/sdc1
        mount: /dev/md0: can't read superblock

    I've seen some options to alter the RAID array with mdadm, but I only want to copy the data off its filesystem before wiping it... Anyone have a clue?
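
    One approach that might work here (a sketch): /proc/mdstat above shows md0 has already grabbed sdc1 as an inactive spare, so it has to be released first; --run accepts the degraded mirror and --readonly guarantees nothing is written before the data is copied off:

        mdadm --stop /dev/md0
        mdadm --assemble --run --readonly /dev/md0 /dev/sdc1
        mount -o ro /dev/md0 /mnt/sdc1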


  • Browsing \\computer\share fails, but net use \\computer\share works?

    - by JMD
    I've had mixed results using Windows Explorer to browse remote file shares. The setup: I'm at work on Windows XP SP3; the files are at home, also on Windows XP SP3. Two separate VPNs are available to reach my PC at home: a corporate OpenVPN (10.1.2.3) and a Hamachi/LogMeIn connection (5.1.2.3). With respect to my problem, it doesn't matter which IP I use; they both behave exactly the same way.

    I expect that if I open Windows Explorer and type in \\10.1.2.3\Shared, I should be challenged for credentials and then be able to interact with the files in the share. However, I just get that annoying dialog: "Windows cannot find '\\10.1.2.3\Shared'. Check the spelling and try again, or try searching for the item ..."

    However, I can take that exact same computername/sharename and with net use I can:

        net use * \\10.1.2.3\Shared * /user:homecomputername\username

    with this result:

        Type the password for \\5.69.83.158\C$:
        Drive Z: is now connected to \\5.69.83.158\C$.
        The command completed successfully.

    I can then access the files on Z: in Windows Explorer, which was my original intent. Even after Z: is already mapped and the credentials are cached, I still cannot bring up \\10.1.2.3\Shared in Windows Explorer. Why does the latter work, but not the former?

    Edit: Other services, such as RDP, work fine. (I also have a problem where I can't SSH home, but I'll consider that separately.)


  • Slow connection to Linux MySQL from Windows only (XAMPP)

    - by Josh
    I'm having a problem with a PHP project (using the Kohana 3.2 framework) on my Windows 7 64-bit machine connecting to the database. The development database is stored on an Ubuntu Linux server on the local network. Other development machines running OS X and Linux connect fine; there are no other Windows development machines to test with. I can access MySQL fine using MySQL Workbench, and other projects (which I believe to be less database-heavy) run mostly ok, only occasionally getting timeout messages. In this particular project, I'm constantly getting "Maximum execution time of 30 seconds exceeded" when functions such as mysql_query() are run. Specifically, the Kohana file where the timeout occurs is MODPATH\database\classes\kohana\database\mysql.php [ 186 ].

    My local set-up is:

    - Windows 7 Professional 64bit
    - XAMPP 1.7.7 (PHP 5.3.8)

    The output of uname -a on the Linux server is:

        Linux peach 2.6.38-11-server #50-Ubuntu SMP Mon Sep 12 21:34:27 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

    I've tried the following, with no success:

    - Disabling the Windows firewall
    - Switching between a persistent and a normal connection
    - Adding skip-name-resolve to my.cnf
    - Increasing wait_timeout
    - Enabling bind-address

    I've run out of ideas now, and have no idea how to debug an odd issue like this. Has anyone come across this before, or have any idea how I could find the root of the issue, or what might be the problem?
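
    One way to narrow down whether the stall is in connecting or in querying (a rough sketch; the host and credentials are placeholders, and mysql_* matches the PHP 5.3 API the project already uses):

        <?php
        $t0 = microtime(true);
        $link = mysql_connect('192.168.0.10', 'devuser', 'devpass'); // hypothetical
        $t1 = microtime(true);
        mysql_query('SELECT 1', $link);
        $t2 = microtime(true);
        // a slow connect with a fast query points at name resolution or
        // the network path rather than at MySQL itself
        printf("connect: %.3fs, query: %.3fs\n", $t1 - $t0, $t2 - $t1);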


  • jdbc4 CommunicationsException

    - by letronje
    I have a machine running a Java app talking to a MySQL instance running on the same machine. The app uses the JDBC4 drivers from MySQL. I keep getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException at random times. Here is the whole message:

        Could not open JDBC Connection for transaction; nested exception is
        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet
        successfully received from the server was 25899 milliseconds ago. The last
        packet sent successfully to the server was 25899 milliseconds ago, which is
        longer than the server configured value of 'wait_timeout'. You should
        consider either expiring and/or testing connection validity before use in
        your application, increasing the server configured values for client
        timeouts, or using the Connector/J connection property 'autoReconnect=true'
        to avoid this problem.

    For MySQL, the global 'wait_timeout' and 'interactive_timeout' are set to 3600 seconds, and 'connect_timeout' is set to 60 secs. The wait_timeout value is much higher than the 26 secs (25899 msecs) mentioned in the exception trace. I use DBCP for connection pooling, and here is the Spring bean config for the datasource:

        <bean id="dataSource" destroy-method="close"
              class="org.apache.commons.dbcp.BasicDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost:3306/db"/>
            <property name="username" value="xxx"/>
            <property name="password" value="xxx"/>
            <property name="poolPreparedStatements" value="false"/>
            <property name="maxActive" value="3"/>
            <property name="maxIdle" value="3"/>
        </bean>

    Any idea why this could be happening? Will using c3p0 solve the problem?
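
    The exception message itself suggests validating pooled connections before use; with DBCP that is two extra properties on the same bean (a sketch, not a guaranteed fix):

        <!-- test each connection with a cheap query as it leaves the pool,
             so one the server has silently dropped is replaced instead of
             being handed to the application -->
        <property name="validationQuery" value="SELECT 1"/>
        <property name="testOnBorrow" value="true"/>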


  • Wildcard redirect issue giving error "This webpage has a redirect loop"

    - by david
    On my website I changed (or, better put, modified) the directory name "vehicles-cars" to "vehicles-cars-for-sale". When I set up a wildcard redirect from the old directory name to the new one in my web hosting cPanel account, every page I open from that directory gives the error: "This webpage has a redirect loop." The website is PHP. The problem is that lots of pages from the old directory are indexed in Google, and they are producing duplicate content. If I redirect a single page it works perfectly, but there are lots of pages, so I need a wildcard redirect for the whole directory. I really need some advice on this problem. Here is the .htaccess code for the redirect:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^example\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.example\.com$
        RewriteRule ^vehicles\-cars\/?(.*)$ "http\:\/\/example\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    I have another wildcard redirect of a whole directory with the same code, and it works perfectly. Here is that code, from the same .htaccess file:

        RewriteCond %{HTTP_HOST} ^adsbuz\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.adsbuz\.com$
        RewriteRule ^autos\/?(.*)$ "http\:\/\/adsbuz\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    So I don't understand what's wrong with the first code. I really need some expert advice. Thanks again.
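
    For what it's worth, the failing rule differs from the working one in that the new name starts with the old one, so ^vehicles-cars(.*) also matches the redirect target itself. A guarded variant might look like this (a sketch; Apache's PCRE supports the negative lookahead):

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$
        # match the old directory only when it is NOT already the new name
        RewriteRule ^vehicles-cars(?!-for-sale)(/.*)?$ http://example.com/vehicles-cars-for-sale$1 [R=301,L]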


  • IP addresses not listed for IIS website bindings

    - by Svinn
    I recently purchased a Windows cloud server from GoDaddy and installed IIS 7 and all the other required software. I have 50.62.1.89 and 2 more public IPs, plus a private IP, 10.1.0.2. The problem is that I am unable to access any website through any public IP: all my public IPs open the default website only. Also, I can't see the public IPs listed for the IIS website bindings; only the private IP is listed for binding. On the server itself, the public IPs likewise open only the default website, but I am able to open the websites using the private IP. The public IP addresses do point to my server correctly: I can reach the server over Remote Desktop using a public IP, and, as I said, the public IPs open the default website from IIS without problem. Please help; I've been confused for the last 2 days.
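
    On cloud VMs the public addresses are usually NATed to the private one, so IIS can only ever bind to 10.1.0.2, and sites then have to be distinguished by host header (or port) rather than by public IP. A sketch of what that might look like (site name and domain are placeholders):

        REM what is HTTP.SYS actually listening on?
        netsh http show iplisten
        REM bind a site to the private IP plus a host header
        %windir%\system32\inetsrv\appcmd set site "MySite" ^
            /+bindings.[protocol='http',bindingInformation='10.1.0.2:80:www.example.com']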


  • Create image (EBS AMI) takes forever - possibly caused MySQL Server to break?

    - by fuzzybee
    I'm trying to create an EBS AMI from my running EC2 instance so I can reuse my LAMP stack, fully configured for my needs. I got my website up and running on this EC2 instance yesterday, and MySQL was working fine until this morning. (It's not that difficult to install LAMP thanks to yum, so I can't see how I could have gone wrong with it; having said that, it's always difficult to notice one's own errors.) I have now been seeing "Loading, please wait ..." for a few hours. How do I know whether the image creation has completed, or check its progress?

    Shortly after I tried to create the AMI image from my EC2 instance, I encountered a database connection error:

        Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

    I was able to restart mysqld at first, but the database connection went down again, and this time I could not restart mysqld any more; it shows "MySQL Daemon failed to start." Could my attempt to create the AMI by any chance have caused the MySQL server to reboot or become corrupted? I have searched a lot and have done the following, although I don't think I should need any workaround for the MySQL server to work here:

        chown -R mysql.mysql /var/lib/mysql/

    I also found this workaround, but I'm very reluctant to follow it, given my belief that I should understand the problem first. Any help would be greatly appreciated. Getting back to searching for a solution to the MySQL server problem ... Thanks, Eric
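
    Two checks that might explain the failed restarts (a sketch; the log path is the CentOS/Amazon Linux default and may differ). Note that creating an EBS AMI reboots the instance unless the no-reboot option is chosen, so an unclean mysqld shutdown around that time would not be surprising:

        # why does mysqld refuse to start?
        tail -n 50 /var/log/mysqld.log
        # a full volume produces exactly these symptoms too
        df -h /var/lib/mysql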


  • Dual-head monitor system Kubuntu 10.04

    - by andrii
    I have an Asus V6X00V notebook with a 1400x1050 monitor (name: LVDS) and a Dell monitor at 1920x1080 (VGA-0), and I want a dual-monitor setup. In MS Windows everything works fine. During the Kubuntu installation, the Dell and the main notebook monitors had the right resolutions (1920x1080 and 1400x1050), but after some stage this changed to 1152x864 for both. The right resolutions now appear only during the shutdown process and when I am using the console, which shows the system can use them; the problem is just in the settings. I am using Size & Orientation in System Settings for the adjustment. Any option that changes the resolution of either monitor, or changes the position (Absolute, Left Of, Right Of, and so on), causes colored line noise on the screens. I have tried xrandr:

        xrandr --output LVDS --mode 1400x1050 --pos 0x0 \
               --output VGA-0 --mode 1920x1080 --right-of LVDS --pos 1400x0

    but got the same result. I have found out that, for example, the previous version of RandR (1.2; I now have xrandr 1.3) needed an xorg.conf modification to create a big virtual screen, but Kubuntu 10.04 doesn't ship an xorg.conf, and I don't know whether I should create one for the 1.3 version of xrandr or not. Please help me solve this problem.
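
    If the driver does need the larger framebuffer pre-allocated, a minimal /etc/X11/xorg.conf can still be created by hand even though Kubuntu 10.04 no longer ships one; a sketch (the identifier is arbitrary, and 3320x1080 is simply 1400+1920 wide by the taller of the two heads):

        Section "Screen"
            Identifier "Default Screen"
            SubSection "Display"
                # room for LVDS (1400x1050) and VGA-0 (1920x1080) side by side
                Virtual 3320 1080
            EndSubSection
        EndSection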


  • Loadbalancing with nginx and tomcat

    - by London
    Hello, this should be fairly easy for any system admin to answer. The problem is that I'm not a server admin, but I have to complete this task; I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access them by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be seen):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, they are sent to either machine1:8080/appName or machine2:9090/appName? Thank you.
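
    One variant that might do it (a sketch): giving proxy_pass a URI part makes nginx replace the matched location prefix with that URI, so a request for / is forwarded to /appName/ on whichever upstream is picked:

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_max_temp_file_size 0;
            # note the trailing URI: "/" is rewritten to "/appName/"
            proxy_pass http://backend/appName/;
        }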


  • Smart card driven membership and door entry system

    - by Rob G
    I'm looking at putting in a smart-card-driven system at my local sports club (which doesn't have oodles of money), and since they're willing to pay for hardware and I'm willing to do the technical setup, I was wondering if anyone had any experience in setting something like this up. Writing any software needed is not the problem; I've pretty much got that covered with various open source projects out there and custom code I'll write. It's more the hardware side I'm not too sure about, and I'm looking for advice from people out there. I'm sure there are numerous complications, but on the surface it looks fairly simple. I'd basically like to enable members to swipe/touch a smart card at the door to gain entry to the club, then walk up to a touch screen PC and swipe/touch a card reader there to "log in" to the system I create, which will allow them to book club facilities etc. I may even want that same card to then activate things like lights or music when they enter the room they've booked. Pretty utopian, I know, but still, we'd like to get as close as we can. As I said, the software shouldn't be a problem, and on the hardware side, so far I'm looking at:

    - An all-in-one touch screen PC running Windows 7 or Ubuntu
    - A USB card reader (not sure which one to buy)
    - Smart cards (again, never bought these before)
    - Door/lighting hardware that could be triggered (not sure here either)

    If anyone has any advice on implementing something like this (especially the items I'm not sure about above, and of course anything crucial I've missed), I'd be most grateful. Recommended hardware that you've used for something like this would be fantastic!


  • Slow upload, fast download on Windows 7 64bit system

    - by Malik
    I've got a weird problem in that the download speeds on my desktop PC (Windows 7 Home Premium 64bit) are consistently fast (approx. 400 kB/s) but uploads are very slow (around 6-10 kB/s). This has been going on for the last 3 weeks or so. I am a very competent user and troubleshooter, and have searched online for 2 weeks for a solution, to no avail. Part of the problem is that internet is provided over WiFi by my landlord and I have no access to the router (a BT Home Hub), although I know for sure he wouldn't have the first idea how to restrict my usage :) (which rules that out). Anyway, I've tried:

    - various drivers (my WiFi card is a TP-Link TL-WN851N, and I've tried the TP-Link, Atheros and Qualcomm Atheros drivers suggested by Microsoft)
    - various tweaks to network parameters (e.g. as suggested by SpeedOptimser)
    - various tweaks to Windows 7 services (e.g. disabling unnecessary services or setting them to manual)
    - raising and lowering my head onto a reasonably firm surface at moderate frequency (jk :D)

    None of the above have helped, and I'm officially asking for help now!! Thanks for your time and effort in advance!


  • Hide/Replace Nginx Location Header?

    - by Steven Ou
    I am trying to pass a PCI compliance test, and I'm getting a single "high risk vulnerability". The problem is described as:

        Information on the machine on which a web server is located is sometimes
        included in the header of a web page. Under certain circumstances that
        information may include local information from behind a firewall or proxy
        server, such as the local IP address.

    It looks like nginx is responding with:

        Service: https
        Received: HTTP/1.1 302 Found
        Cache-Control: no-cache
        Content-Type: text/html; charset=utf-8
        Location: http://ip-10-194-73-254/
        Server: nginx/1.0.4 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)
        Status: 302
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
        X-Runtime: 0
        Content-Length: 90
        Connection: Close

        <html><body>You are being <a href="http://ip-10-194-73-254/">redirected</a>.</body></html>

    I'm no expert, so please correct me if I'm wrong, but from what I gathered, I think the problem is that the Location header is returning http://ip-10-194-73-254/, which is a private address, when it should be returning our domain name (which is ravn.com). So I'm guessing I need to either hide or replace the Location header somehow? I'm a programmer, not a server admin, so I have no idea what to do... Any help would be greatly appreciated! I might also add that we're running more than one server, so the configuration would need to be transferable to any server with any private address.
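
    Since the application builds that redirect from the hostname it is handed, one transferable approach might be a catch-all server block that canonicalizes any request not addressed to the real name before it reaches the app (a sketch; ravn.com stands in for the configured server_name):

        # a request whose Host header is not ravn.com (e.g. the scanner
        # hitting the bare IP) is redirected to the canonical name, so the
        # internal hostname never ends up in a Location header
        server {
            listen 80 default_server;
            server_name _;
            rewrite ^ http://ravn.com$request_uri? permanent;
        }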


  • OS X won't boot up unless I hold down option key

    - by Gazzer
    I have a strange issue on an early 2008 Mac Pro running OS X 10.6:

    - If I restart the computer, it restarts normally.
    - If I shut down and boot, it stops at the grey screen just before the boot process.
    - If I shut down and boot but hold down the Option key, I can select the boot disk and all is good.

    I've just cloned the disk, and the same thing happens. The disk is a Samsung HD154UI, and it is partitioned (the second partition holds a clone of the Snow Leopard install disk). One weird thing on the original disk was that one of the partitions said 'EFI Boot' in a non-aliased font, rather than the name of the disk, when the disks were listed upon holding down Option.

    Solution: it seems that there was a problem with the disk. Part of the difficulty in finding the solution was that you need to remove the disk from the computer completely. For example, a good disk in Bay 3 wouldn't boot up if the bad disk was in Bay 2, so for ages I thought the problem was hardware related to Bay 3. So if you think you have a dodgy disk, remove it totally when testing the hardware with a 'clean' disk.


  • pptp server 2003 hands out gateway from nic not dhcp server

    - by Pete
    I have created a PPTP RRAS server for a handful of clients to connect to, and I would like them to use the server's default gateway (.1) for internet access. They are able to connect successfully (and see the LAN), but the VPN then cuts them off from the internet. I understand that all internet traffic would be routed through the PPTP server, but that's ok since I have enough pipe. The problem seems to be that the client's gateway shows as their assigned RAS IP, and the client's assigned DNS settings seem to be what is set on the server's NIC, not what I have specified in DHCP (which runs on the same server). The DHCP relay agent properties point to the NIC that DHCP is running on (192.168.100.163); .1 is the gateway both in the NIC hardware properties and in DHCP. I have different secondary and tertiary DNS entries in my NIC properties than what DHCP is configured for. The problem is that I have a 10.10.1.x network that people cannot see if they uncheck the gateway option, but otherwise they are unable to see our other hosted sites on the internet.
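
    If a split tunnel turns out to be acceptable, a client-side route is the usual complement to unchecking "use default gateway on remote network"; a sketch with placeholder addresses (assuming the RAS-assigned addresses are in 192.168.100.x):

        :: reach the 10.10.1.x network through the tunnel while internet
        :: traffic keeps using the local gateway; -p makes it persistent
        route add 10.10.1.0 mask 255.255.255.0 192.168.100.163 -p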


  • How to setup port forwarding from my Webserver (apache) to my Database server (mysql)

    - by karman888
    Hello again guys, and thank you for your help so far. Here is my problem: I have two remote dedicated servers, one webserver that runs Apache and one DB server that runs MySQL. The Apache server is visible on the internet, of course, but the second server is only visible to the Apache server, because they are connected by LAN. I need to connect to the remote MySQL server over the internet from my home PC, but only the Apache server is visible to my home PC. How can I set up port forwarding on my Apache server so I will be able to "see" the MySQL server from my home PC? This question is a follow-up to my first question, "Connect to remote mysql server from my application. Problem is that Mysql server is on LAN", in which you helped me a lot by telling me to do port forwarding. I have looked over the internet and I can't find a good how-to on port forwarding. I'm an experienced programmer, but I have little experience with hardware and networks. I can understand what must be done, so I just need a little help to sort things out :) I hope you can help me, guys. Thank you in advance.

    p.s. The machine Apache is running on is CentOS; the MySQL server is also CentOS.
    p.s2. The webserver runs WebHostManager; I don't know if that makes any difference or whether it can be done easily through it, I just mention it :)
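
    Two common ways to do this, both sketches with placeholder addresses (assuming the DB server's LAN address is 192.168.0.2). An SSH tunnel through the web server avoids exposing MySQL to the internet at all:

        # run on the home PC: local port 3307 reaches, via the web
        # server, the MySQL server on the LAN
        ssh -L 3307:192.168.0.2:3306 user@webserver.example.com
        mysql -h 127.0.0.1 -P 3307 -u dbuser -p

    Alternatively, a kernel-level forward on the web server (requires net.ipv4.ip_forward=1, and it exposes MySQL publicly, so firewall it to your home IP):

        iptables -t nat -A PREROUTING -p tcp --dport 3306 \
                 -j DNAT --to-destination 192.168.0.2:3306
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.2 --dport 3306 -j MASQUERADE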


  • Gaming blew fuse: how to overcome?

    - by George Tomlinson
    I've been gaming for a while now. When playing certain games this PC goes into overdrive: the fans get so busy they start to sound like a jet engine, and I have smelt burning when this has happened. Recently, the fuse blew on the 4-socket adapter I was using. On the following thread, in what seems like it could be a related issue, someone said this could be due to the PSU not being strong enough to handle the load (although the person who posted that question did say that blowing a fan on their PC stopped it crashing in their case): http://www.tomshardware.co.uk/answers/id-2047543/gtx-650-overheating-issue.html. This is exactly what they said:

        Your GPU isn't overheating. 70+ before it would shutdown and cause a
        restart. Make sure your PSU is strong enough to handle your new system
        at load and possibly run Memtest to check your RAM (although not
        BSOD'ing and just shutting down points to the PSU).

    This (the PSU part) makes more sense to me than dust etc., since it seems a more plausible explanation of why the fuse blew. The PC has no problems except when playing certain games, i.e. TERA Rising, and WoW with add-ons (I think WoW is ok as long as I don't have more than 1 add-on (Healers Have To Die)). I'm just wondering if anyone knows or can suggest what I might be able to do to play these games without this problem occurring. The PC's spec is:

    - Display: NVIDIA GeForce GTX 650
    - 8GB RAM (6 available)
    - Processor: AMD FX(tm)-8120 Eight-Core Processor - 3.1 GHz, 4 cores, 8 logical processors

    I have read on another post that forcing vsync in the Nvidia Control Panel helped with what seems to be a similar problem, so I plan to see if that solves it, God permitting.


  • Dosbox USB print Windows 8.1 64Bit

    - by eCronik
    This worked fine until I upgraded to Windows 8.1 and set up the mail program (I had to get a Windows Live ID, and now have to type in a password when starting Windows). I had mapped the USB printer to LPT1 on the local Windows 8 computer, and on another XP computer I mapped the same printer, via LAN, to LPT2 the same way. But now it doesn't work any more from the Windows 8 machine (where the printer is plugged in via USB). I have already tried deleting LPT2 on the XP machine, as well as LPT1 on the Windows 8 one, and resetting them. Not working... :-(

    I tried (the backslashes here are as best I can reconstruct them):

        net use lpt1: \\server\printer password /user:"Ute Berger" /persistent:yes

    and of course with the correct server and shared printer names. "Ute Berger" is the name now displayed as the user, but in C:\Users the account is named Benutzer1; I tried this one too. Nothing worked. What could be the problem here? What's strange is that when I type "net use lpt1:" on the XP machine, I get a different error (67 - The network name was not found) than for something I never set up, like LPT2 (2 - The system cannot find the file). Could something left over be blocking things on the Windows 8 computer? Please help me; I tried for hours today, but all I got was frustration... Regards, Tim


  • Photoshop CS5 performance over network drive (cifs)

    - by grub
    Hello everyone. I installed a QNAP TS-410 NAS for a customer (a professional photographer) with three Hitachi Deskstar 7200rpm 2TB disks configured as RAID5. The NAS and the workstations are connected over a gigabit network. He and his co-worker access the photos (about 1TB of them) over a mapped network drive from their Windows machines (Windows XP 32-bit and Windows 7 Ultimate 32-bit). Both use Photoshop CS5 to edit the photos. The problem is that saving an edited photo takes a really long time: about 3 times as long as opening it. After some tests I can exclude the network, the NAS and the Windows machines as the source of the issue. I think the problem is the Photoshop software and its handling of network drives; officially, network drives are not supported by Adobe. I do not have any experience with Adobe products, especially with Photoshop CS5. What are your recommendations for solving this performance issue? Should my customer copy the photos to a local drive, edit them and upload them again to the network drive, or is Adobe Drive or Adobe Version Cue the answer? One requirement is that the photos need to be accessible/editable from both computers even when one of them is offline. Adobe Version Cue needs a dedicated service running to be usable, so that solution is not possible, as far as I understand the Cue software. Thank you for your input on this issue, and have a nice day :-) Greetings, grub


  • DPM server 2010 Attach agent error : administrator privileges missing?

    - by Michael
    I'm hoping you will be able to help me out with this little problem I'm having. I installed DPM 2010 in our test environment to test backups on Exchange 2010 servers. The environment includes:

    - 1x DC
    - 2x Exchange Server 2010
    - 1x DPM 2010 server

    All of these run on Windows Server 2008 R2 virtual machines; the host machines use Hyper-V. The problem goes like this:

    1. I tried to install the agents from the DPM server GUI, which failed saying I didn't have the correct permissions.
    2. So then I tried the manual installation using the commands from the Microsoft site: http://technet.microsoft.com/en-us/library/bb870935.aspx
    3. The agent installation worked, but when I get to attaching the agents to the DPM server, it still gives me the error saying that the specified account does not have administrator rights.
    4. I tried the domain admin, users who are domain admin + local admin, and single local admins.
    5. I have turned off the Windows firewall and made sure all the services are running.

    So now I'm out of ideas and really need help; attaching the agents to the DPM server is the last thing holding me back from deploying everything to the production site. Any help would be really appreciated.
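
    For reference, the manual attach described in that TechNet article has two halves; a sketch of both (server names, domain, credentials and the install path are placeholders):

        :: on each protected server, point the agent at the DPM server
        "C:\Program Files\Microsoft Data Protection Manager\DPM\bin\SetDpmServer.exe" -dpmServerName DPM01
        :: then, in the DPM Management Shell on the DPM server:
        :: Attach-ProductionServer.ps1 <DpmServer> <ProtectedServer> <User> <Password> <Domain>
        Attach-ProductionServer.ps1 DPM01 EXCH01.test.local administrator P@ssw0rd TEST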

