Search Results

Search found 9387 results on 376 pages for 'double byte'.

  • Format & Fresh Install Mac OS X Snow Leopard on Mac mini.

    - by sagar
    Hello everyone. I purchased a DVD of Snow Leopard (Mac OS X 10.6.2) for a Mac mini that came with Leopard (Mac OS X 10.5.7). I installed Mac OS X 10.6.2 and everything went perfectly; the system was installed successfully. The problem I then faced is that my older data remained as it was (the installation didn't format anything - it was done as an upgrade). Now my system runs very slowly; the Mac mini's previous performance was roughly double what I get with the upgraded version. My questions are as follows. Does an upgrade installation cause performance issues in Mac OS X? Is Snow Leopard too demanding for the Mac mini (2 GHz Intel Core 2 Duo, 1 GB RAM - is this configuration OK for Snow Leopard)? Does a fresh install work better than an upgrade?

    Read the article

  • Fix MBR from installed Windows Vista

    - by Danilo
    Hi guys, I have a rather strange problem. I had a system with Vista and Ubuntu installed. We always use Vista, and Ubuntu was something we really did not need. BUT: GRUB (I guess GRUB 2) was used to boot. While in Vista I deleted the Ubuntu partition and, with it, GRUB. Now the system does not boot anymore. I tried to reinstall Ubuntu, but I had some problems with the CD. At the moment, when the system boots I land in the GRUB shell. From there I am able to boot Windows Vista with commands like these:

      grub> title windows
      rootnoverify (hd0,msdos3)
      chainloader +1
      boot

    Now the question is: if I am able to boot Windows Vista with this trick, is it possible to fix the MBR from inside the installed Windows Vista with some command/tool of Vista itself? I should probably mention that we are not interested in dual boot at the moment; we only want Vista to start. I can sum up the question like this: is there a way to fix the MBR from the installed copy of Windows Vista, given that GRUB is currently installed? I hope I was clear enough. Thanks for your help.
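
    For reference, a commonly cited route is bootsect.exe, which ships in the \boot folder of the Vista install DVD and can rewrite a Vista-compatible MBR and boot code from a running Windows. This is a minimal sketch, not a guaranteed fix; the drive letter D: is an assumption for the DVD drive, and if no DVD is handy the recovery environment's Startup Repair / bootrec /FixMbr does the equivalent.

      rem Run from an elevated Command Prompt inside Vista (assumes the install DVD is in D:)
      rem /nt60 writes the Vista/BOOTMGR boot code, ALL applies it to every volume,
      rem /mbr additionally rewrites the master boot record, replacing GRUB
      D:\boot\bootsect.exe /nt60 ALL /mbr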

    Read the article

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4 GB of memory; all recent updates are installed. Problem: when I copy a large file from one folder to another, I get about 10 MB/s average speed. When I read the same file from a network share over a 1 Gbps connection, I get about 25-30 MB/s. Both numbers seem low to me, but the low write speed is what frustrates me most. There is no antivirus and no Hyper-V; it's just a file server, and when I run my tests nobody else is reading from or writing to it (we have only 4 people on the team, so I'm sure). Not sure if it matters, but there is only one logical disk "C" with all available space (1400 GB). I'm not an admin at all, so I have no idea where to look or what other information to provide. I did run Performance Monitor with "% idle time", "avg bytes read" and "avg bytes write" - here is the screenshot: I'm not sure why there are such obvious spikes. Any idea? Please let me know if you need me to provide more information - what counters I should check, etc. I'm very eager to get this solved. Thank you.

    UPDATE: we have another Windows 2008 R2 SP1 server with 2 RAID 1 arrays - one is disk C (where Windows is installed), the other is disk E. It runs Hyper-V and has no antivirus. I noticed the following behavior when copying a large file (a few GB):

      C -> C: about 50 MB/s
      C -> E: about 55 MB/s
      E -> E: 8 MB/s!!!
      E -> C: 8 MB/s!!!

    What could cause this? The E drive is a RAID 1 array of the same Seagate Barracuda 1TB drives.

    Read the article

  • Passive FTP Server Port Configuration Troubles Win2003

    - by Chris
    Win2003, ports 20 & 21 are open. IIS 6 with Direct Metabase Edit enabled. I configured the FTP service passive range to 5500-5550, added 5500-5550 to the Windows firewall, ran iisreset and double-checked by restarting the FTP service. Nothing has changed: when I connect and enter passive mode, it still hangs whenever I try to LIST or transfer files. Active is just as useless.

      Microsoft Windows [Version 6.1.7600]
      Copyright (c) 2009 Microsoft Corporation. All rights reserved.

      C:\Users\user>ftp
      ftp> open x.x.x.x
      Connected to x.x.x.x.
      220-Microsoft FTP Service xxxxxxxxxxxxxxxxxx
      220 xxxxxxxxxxxxxxxxxx
      User (x.x.x.x:(none)): user
      331 Password required for user.
      Password:
      230-YOUR ACTIVITY IS BEING RECORDED TO THE FULLEST EXTENT
      230 User user logged in.
      ftp> QUOTE PASV
      227 Entering Passive Mode (82,19,25,134,21,124)
      ftp> ls
      200 PORT command successful.
      150 Opening ASCII mode data connection for file list.

    and it hangs. Now, I can see from Microsoft documentation that on newer Windows releases additional steps such as these are suggested, but they don't work on 2003:

      netsh advfirewall firewall add rule name="FTP Service" action=allow service=ftpsvc protocol=TCP dir=in
      netsh advfirewall set global StatefulFTP disable

    Is there anything I am missing, and what is this StatefulFTP malarkey at the end? EDIT: I can connect and transfer binary files using the WinSCP client - therefore the problem must be with my ftp commands, no? Can anyone see anything wrong with my Windows ftp client example? Why would it hang on ls? I tried QUOTE LIST as well, and that just hangs too. The Windows ftp client doesn't work in active mode either; it hangs if I try to go "binary" and then "put". This worked before I added 5500-5550 on the router. I have since added this range to the router, but it made no difference to the Windows ftp client.
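
    For reference, the transcript above shows the client still sending PORT after QUOTE PASV (the classic Windows command-line ftp.exe has no passive-mode support; QUOTE PASV only makes the server listen, the client ignores it), so it is not a reliable passive test. A hedged cross-check with a client that really uses passive mode, assuming curl is available; user, password and host are placeholders:

      # curl uses passive FTP by default; this lists the root directory
      curl -u user:password ftp://x.x.x.x/

      # force active mode for comparison ("-" means use the control connection's address)
      curl -u user:password --ftp-port - ftp://x.x.x.x/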

    Read the article

  • Chrome caching 302 redirects

    - by Thermionix
    I have a PHP script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes make another request and a different image is returned. Under Chrome, the request seems to be cached and only opening the page in a new tab causes it to actually query the script. I believe this used to work in older versions of Chrome. I've tried a few different redirect codes, all with the same result. Any tips?

      <img class="banner" src="/inc/banner.php" alt="">

      ~$ cat /var/www/inc/banner.php
      <?php
      header("HTTP/1.1 302 Redirect");
      header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
      //header('HTTP/1.1 307 Temporary Redirect');
      //header("expires: none");
      //header("expires: max");
      //header("Cache-Control: public");
      $folder = '../img/banner/';
      $exts = 'jpg jpeg png gif';
      $files = array();
      $i = -1;
      if ('' == $folder) $folder = './';
      $handle = opendir($folder);
      $exts = explode(' ', $exts);
      while (false !== ($file = readdir($handle))) {
          foreach($exts as $ext) { // for each extension check the extension
              if (preg_match('/\.'.$ext.'$/i', $file, $test)) { // faster than ereg, case insensitive
                  $files[] = $file; // it's good
                  ++$i;
              }
          }
      }
      closedir($handle); // We're not using it anymore
      mt_srand((double)microtime()*1000000); // seed for PHP < 4.2
      $rand = mt_rand(0, $i); // $i was incremented as we went along
      header('Location: '.$folder.$files[$rand]);
      flush();
      ?>

    curl output:

      ~$ curl -I -k https://example.net/inc/banner.php
      HTTP/1.1 302 Redirect
      Server: nginx/1.1.14
      Date: Fri, 24 Feb 2012 03:23:46 GMT
      Content-Type: text/html
      Connection: keep-alive
      X-Powered-By: PHP/5.3.10-1ubuntu1
      Cache-Control: max-age=0, no-cache, no-store, must-revalidate
      Location: ../img/banner/2.jpg

    Read the article

  • Apache2 Virtualhost practice config issue

    - by sisko
    I am practicing virtual host configuration. In my /var/www directory I have created 3 directories called test1, test2 and test3, each of which has a simple index.php script in it, i.e. test1/index.php etc. In /etc/apache2/sites-available/test1 I have the following configuration:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          ServerName test1
          DocumentRoot /var/www/test1
          <Directory />
              Options FollowSymLinks
              AllowOverride All
          </Directory>
          <Directory /var/www/test1/>
              Options -Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
      </VirtualHost>

    All the other sites have a similar VirtualHost definition. I have enabled the site (the symlink appears in sites-enabled) and restarted Apache. However, when I visit localhost/test1, I get a 404 error. My error log shows the following message:

      [Wed Oct 23 06:22:52 2013] [error] [client 127.0.0.1] File does not exist: /var/www/test1/test1

    I don't know why I get the double test1/test1 in the error log. I'm trying to find the right VirtualHost setup that will allow all 3 test websites to be served from their URLs, i.e. test1/index.php, test2/index.php and test3/index.php. Can anyone help me out, please?
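
    For reference, the doubled path is consistent with name-based vhost selection: a request for localhost matches no ServerName, so Apache falls back to the first enabled vhost (test1), whose DocumentRoot /var/www/test1 then gets the URL path /test1 appended, giving /var/www/test1/test1. A minimal sketch of testing the sites by hostname instead of by path; the hostnames are the ones from the question and curl is assumed to be installed:

      # map the test hostnames to the local machine (needs root)
      echo "127.0.0.1 test1 test2 test3" | sudo tee -a /etc/hosts

      # request each site by name so the Host header picks the matching VirtualHost
      curl -i http://test1/index.php
      curl -i http://test2/index.php

      # or test without touching /etc/hosts by overriding resolution just for curl
      curl -i --resolve test1:80:127.0.0.1 http://test1/index.php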

    Read the article

  • Adding add-ins to Excel - strange error messages

    - by Jacob
    I am using Excel 2010 and 2013. I would like to add an Excel add-in from the page http://xlloop.sourceforge.net/ . There is a file named xlloop-0.3.2 with the extension "Microsoft Excel XLL Add-In". I added this file from the menu: File - Options - Add-Ins - in the Manage combo box I chose Excel Add-Ins - Go... - Browse, and I chose my file. I see the following message: "C:\...\xlloop-0.3.2.xll" is not a valid add-in. So I made a second attempt: I went to File - Open and chose my file. I see the message: The file you are trying to open, "xlloop-0.3.2.xll", is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now? After I clicked Yes I see a lot of strange characters (something like Chinese :)). My last attempt was double-clicking the file. I see: The file format and extension of "xlloop-0.3.2.xll" don't match. The file could be corrupted or unsafe. Unless you trust its source, don't open it. Do you want to open it anyway? After clicking Yes I see something like the second attempt. I am really confused, because some of my friends have the same version of Excel and they don't get these messages. Do you have any idea what the problem with my Excel is? I really need this add-in to work with Java. I will be very grateful for your help! Thanks in advance!

    Read the article

  • Apache2: 400 Bad Request with Rewrite Rules, nothing in error log?

    - by neezer
    This is driving me nuts. Background: I'm using the built-in Apache2 & PHP that comes with Mac OS X 10.6. I have a vhost set up as follows:

      NameVirtualHost *:81

      <Directory "/Users/neezer/Sites/">
          Options Indexes MultiViews
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>

      <VirtualHost *:81>
          ServerName lobster.dev
          ServerAlias *.lobster.dev
          DocumentRoot /Users/neezer/Sites/lobster/www

          RewriteEngine On
          RewriteCond $1 !^(index\.php|resources|robots\.txt)
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^(.*)$ index.php/$1 [L,QSA]

          LogLevel debug
          ErrorLog /private/var/log/apache2/lobster_error
      </VirtualHost>

    This is in /private/etc/apache2/users/neezer.conf. The code in the lobster project is PHP with the CodeIgniter framework. Trying to load http://lobster.dev:81/ gives me:

      400 Bad Request

    Normally I'd go check my logs to see what caused it, yet my logs are empty! I looked in both /private/var/log/apache2/error_log and /private/var/log/apache2/lobster_error, and neither records ANY message relating to the 400. I have LogLevel set to debug in /private/etc/apache2/http.conf. Removing the rewrite rules gets rid of the error, but these same rules work on my MAMP host. I've double-checked and rewrite_module is loaded in my default Apache installation. My http.conf can be found here: https://gist.github.com/1057091 What gives? Let me know if you need any additional info. NOTE: I do NOT want to add the rewrite rules to .htaccess in the project directory (it's checked into a git repo and I don't want to touch it).

    Read the article

  • Suggestions for accessing SQL Server from internet

    - by Ian Boyd
    I need to be able to access a customer's SQL Server, and ideally their entire LAN, remotely. They have a firewall/router, but the guy responsible for it is unwilling to open ports for SQL Server and is unable to support PPTP forwarding. The admin did open VNC on a non-standard port, but since they have a dynamic IP it is difficult to find them all the time. In the past I created a VPN connection that connects back to our network, but that didn't work so well, since whenever I needed access I had to ask the computer-phobic users to double-click the icon and press Connect. I did try creating a scheduled task that attempts to keep the VPN connection back to our office up at all times by running:

      rasdial "vpn to me"

    But after a few months the VPN connection went insane, and thought it was both, and neither, connected and disconnected; and the VPN connection wouldn't work again until the server was rebooted. Can anyone think of a way I can access the customer's LAN that doesn't involve:

      - opening ports on the router
      - needing to know their external IP
      - customer interaction of any kind

    And spare me the "blah blah blah use a VPN", "the VNC protocol has known weaknesses", "you are unwise to lower your defenses", "it's not wise to expose SQL Server directly to the internet", "you stole that line from Empire". The customer doesn't care about any of that. The customer wants things to work.
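
    For reference, one pattern that fits those constraints (outbound only, no customer interaction) is a persistent reverse SSH tunnel from an always-on box inside the customer LAN back to the office, with SQL Server reached through the forwarded port. This is a hedged sketch, assuming something on the customer side can run ssh/autossh; host names, ports, the account and the SQL credentials are placeholders:

      # on a machine inside the customer LAN (run unattended, e.g. at boot):
      #   -N  no remote command, tunnel only
      #   -R  listen on the office server's port 14333 and forward to SQL Server's 1433
      autossh -M 0 -f -N \
          -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
          -R 14333:sqlserver.customer.lan:1433 tunnel@office.example.com

      # back at the office, point SSMS or sqlcmd at the forwarded port:
      sqlcmd -S tcp:localhost,14333 -U someuser -P 'somepassword'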

    Read the article

  • 100% iowait + drive faults in dmesg

    - by w00t
    Hi, I have a server hosting a fairly busy web app. It has a RAID 1 of 2 HDDs, 64MB buffer, 7200 RPM. Today it started throwing out errors like:

      kernel: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      kernel: ata2.00: cmd b0/d0:01:00:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
      kernel: res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
      kernel: ata2.00: status: { DRDY }
      kernel: ata2: hard resetting link
      kernel: ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      kernel: ata2.00: max_sectors limited to 256 for NCQ
      kernel: ata2.00: max_sectors limited to 256 for NCQ
      kernel: ata2.00: configured for UDMA/133
      kernel: sd 1:0:0:0: timing out command, waited 7s
      kernel: ata2: EH complete
      kernel: SCSI device sda: 976773168 512-byte hdwr sectors (500108 MB)
      kernel: sda: Write Protect is off
      kernel: SCSI device sda: drive cache: write back

    All day long it has been under load higher than 10-15. I am monitoring it with atop and it gives some bizarre readings:

      DSK | sda | busy 100% | read 2  | write 208 | KiB/r 16 | KiB/w 32 | MBr/s 0.00 | MBw/s 0.65 | avq 86.17 | avio 47.6 ms |
      DSK | sdb | busy 1%   | read 10 | write 117 | KiB/r 17 | KiB/w 5  | MBr/s 0.02 | MBw/s 0.07 | avq 4.86  | avio 1.04 ms |

    I frankly don't understand why only sda is taking all the hit. I do have one process that is constantly writing 1-2 MB, but what the hell... 100% iowait?
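
    For reference, ata timeouts followed by hard link resets usually point at the drive or its cabling rather than at load; a hedged sketch of the usual checks, using the device names from the question (smartmontools assumed to be installed, and /proc/mdstat only applies if this is Linux software RAID):

      # quick SMART health verdict, then the full attribute/error dump for the busy disk
      smartctl -H /dev/sda
      smartctl -a /dev/sda    # watch Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count

      # queue a short self-test; results appear later in the self-test log
      smartctl -t short /dev/sda
      smartctl -l selftest /dev/sda

      # state of the mirror, if it is md RAID
      cat /proc/mdstat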

    Read the article

  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my Postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to Postfix with the following setting:

      mailbox_command = /usr/bin/procmail -f -a "$USER"

    I have nothing in my procmail config but the following:

      LOGFILE=/var/procmailrc/log

    And I sent an email to a recipient that previously worked (before I attached procmail). Now it fails with this error:

      Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active)
      Apr 6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" procmail: Couldn't read "//root" )
      Apr 6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded
      Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed

    It seems like there is some sort of permissions issue, but I do not know what the problem is, nor do I understand how to go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail+postfix work?
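
    For reference, the bounced message was addressed to postmaster, which aliases to root; Postfix's local(8) delivery agent will not run mailbox_command as root and drops to the default_privs user (nobody), which is why procmail tries to write /var/spool/mail/nobody and read root's home and fails. A hedged sketch of the usual fix, routing root's and postmaster's mail to a normal account; the user name is a placeholder:

      # /etc/aliases - deliver root's and postmaster's mail to a regular local user
      #   postmaster: root
      #   root:       someuser

      # then rebuild the alias database and flush the queue
      newaliases
      postqueue -f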

    Read the article

  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stopped working properly because it is trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache. The only thing that worked was disabling JavaScript altogether. This seems to be the culprit link: http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js What can I do? EDIT: I think I have found where the problem is. My proxy is serving the file one byte at a time, so Firefox consumes it at that pace. What I don't understand is why Safari and Chrome get it right away. What I did last night was leave Firefox open all night to give it a chance to load the file; my hope was that it would get cached and the next time there would be no need to fetch it. This morning the page loaded successfully, but the file was not cached, because the next request failed in the same way. Here's a video showing the problem:

    Read the article

  • How can Standard User change file associations in Windows 2000?

    - by Gary M. Mugford
    One of my clients is still running a Win2K server with a host of Win2K workstations, and no net admin, due to the downturn of the economy over the years. I'm sort of helping out - out of my depth, but I am a loyal foot soldier. A problem I encounter rather too often is that a user double-clicks on a file in Explorer and then either gets no action, or the wrong program runs. It's a case of a missing or out-of-date file association. The current cure is to temporarily upgrade the user from Standard to Power User, do the file-association switch and then change back. As Winnie would whine, 'Oh, bother!' At any rate, I thought I'd ask here: is there a method/program to edit/add a file association without the rigamarole, FROM the Standard User's account on the workstation? I assume the program route would involve RunAs. I 'believe' most of the workstations run the RunAs service, but I could be wrong; I understand that's required if there is to be a solution. Any help accepted with thanks. GM NOTE: It seems wassociate from http://www.xs4all.nl/~wstudios/Associate/index.html can resolve the issue.
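
    For reference, a minimal sketch of the RunAs route, using the assoc and ftype built-ins; the admin account name and the .log-to-Notepad mapping are made-up examples, and note this edits the machine-wide association (same effect as the temporary Power User workaround):

      rem from the standard user's session, start a command prompt under an admin account
      runas /user:MACHINE\Administrator cmd

      rem then, in that elevated prompt, e.g. map .log files to Notepad:
      assoc .log=txtfile
      ftype txtfile="%SystemRoot%\system32\notepad.exe" "%1"

      rem per-user overrides can also live under HKCU\Software\Classes if a
      rem machine-wide change is not wanted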

    Read the article

  • Cron won't use msmtp to send emails in case of failed cronjob

    - by Glister
    I'm trying to configure a machine so that it will send me an email if one of the cron jobs outputs something in case of an error. I'm using Debian Wheezy. Cron is working normally (without the email functionality). msmtp is installed and configured. I have already symlinked /usr/{bin|sbin}/sendmail to /usr/bin/msmtp. I can send email by using:

      echo "test" | mail -s "subject" [email protected]

    or by executing:

      echo "test" | /usr/sbin/sendmail

    Without the symlink (/usr/sbin/sendmail) cron tells me:

      (CRON) info (No MTA installed, discarding output)

    With the symlinks I get:

      (root) MAIL (mailed 1 byte of output; but got status 0x004e, #012)

    Can you suggest how to configure the cron/msmtp pair? Thanks!

    EDIT: Note: I wrote "msmtpd" by mistake. It's not a daemon but an SMTP client named just "msmtp" (without the trailing "d"). It is executed on demand and is not running in the background all the time. When I try to send an email with msmtp like this, it works:

      echo "test" | msmtp [email protected]

    On the far side, in the logs of the SMTP server, I read:

      Nov 2 09:26:10 S01 postfix/smtpd[12728]: connect from unknown[CLIENT_IP]
      Nov 2 09:26:12 S01 postfix/smtpd[12728]: 532301C318: client=unknown[CLIENT_IP], sasl_method=CRAM-MD5, [email protected]
      Nov 2 09:26:12 S01 postfix/cleanup[12733]: 532301C318: message-id=<>
      Nov 2 09:26:12 S01 postfix/qmgr[2404]: 532301C318: from=<[email protected]>, size=191, nrcpt=1 (queue active)
      Nov 2 09:26:12 S01 postfix/local[12734]: 532301C318: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=0.62, delays=0.59/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to command: IFS=' ' && exec /usr/bin/procmail -f- || exit 75 #1001)
      Nov 2 09:26:12 S01 postfix/qmgr[2404]: 532301C318: removed
      Nov 2 09:26:13 S01 postfix/smtpd[12728]: disconnect from unknown[CLIENT_IP]

    And the email is delivered to the target user. So it looks like the msmtp client is working properly. It has to be something in the cron/msmtp integration, but I have no clue what that thing might be. Can you help me?
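
    For reference, "got status 0x004e" only means the sendmail command cron invoked exited with an error; with msmtp that is most often a missing or unreadable configuration for the invoking user (root), since cron does not read the per-user ~/.msmtprc of whoever set things up. A hedged sketch of a system-wide config plus an explicit MAILTO; the SMTP host, addresses and password are placeholders:

      # /etc/msmtprc - system-wide msmtp configuration (chmod 600 if only root sends)
      #   defaults
      #   auth           on
      #   tls            on
      #   tls_trust_file /etc/ssl/certs/ca-certificates.crt
      #   logfile        /var/log/msmtp.log
      #
      #   account        default
      #   host           smtp.example.com
      #   port           587
      #   from           cron@example.com
      #   user           cronuser
      #   password       secret

      # in the crontab, tell cron where job output should go:
      #   MAILTO="admin@example.com"

      # test the exact path cron uses (the sendmail symlink, run as root):
      printf "Subject: cron test\n\ntest body\n" | /usr/sbin/sendmail admin@example.com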

    Read the article

  • Linux RFID reader HID Device not matching driver

    - by blietaer
    Hello, I have an RFID reader (GigaTek PCR330A-00) that is meant to be recognized under Linux/Windows as a keyboard HID device over USB. I hate to say this, but it works like a charm under Win7 and not "really" under Linux. Under Debian-like distros (x/k/Ubuntu, Debian, ...) or Gentoo or ... I just can't get the device working at all: the device scans fine (it has its USB 5V, so it is happy/beeping/blinking), something shows up in dmesg, but there is no immediate on-screen output of the RFID tag code as expected (and as seen under Win7). Support claims it is OK under RHEL or SLED "enterprise" distros, and I must admit I saw it working under a RHEL4. I tried lifting the driver from there but did not succeed in getting my reader to work. My question is thus twofold: 1. How can I hack the kernel to add support for my device (simply register its PID/VID?)? 2. What is different at all in an "enterprise" proprietary distro, and how can I re-use it? Thank you for any hint/help. Cheers,
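
    Before patching anything, it may be worth confirming whether usbhid already binds to the reader and whether key events actually arrive. A hedged diagnostic sketch; the grep pattern and the eventX device name are placeholders to be taken from the real output:

      # does the reader enumerate, and which driver claims its interface?
      lsusb
      lsusb -t

      # does it register as an input device?
      ls -l /dev/input/by-id/
      grep -B2 -A6 -i "rfid\|reader" /proc/bus/input/devices

      # dump raw key events while scanning a tag (evtest is in most distros' repos)
      sudo evtest /dev/input/eventX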

    Read the article

  • Identify differences between MP3 files

    - by Thingomy
    I have 2 old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or are identical; I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, listening to each twice, I haven't done many). I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in the ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10kb of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree. I found How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.? -- but I can't run AllDup since I'm on Linux, and from the sounds of it, it would only partially solve my issues anyway.
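
    One way to compare just the audio is to hash the decoded streams with ffmpeg, since ID3v1/ID3v2 tags are metadata and never reach the decoder. A hedged sketch assuming ffmpeg is installed; the old/ and new/ directory names are placeholders:

      # identical MD5 output means the audio data matches even if the tags differ
      ffmpeg -loglevel error -i old/song.mp3 -map 0:a -f md5 -
      ffmpeg -loglevel error -i new/song.mp3 -map 0:a -f md5 -

      # hash a whole tree so both sides can be compared with a single diff
      find old -name '*.mp3' | sort | while read -r f; do
          printf '%s ' "${f#old/}"                      # path relative to the tree root
          ffmpeg -loglevel error -i "$f" -map 0:a -f md5 -
      done > old.md5
      # repeat for 'new', then:  diff old.md5 new.md5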

    Read the article

  • Unable to Access Certain Websites

    - by codejoust
    On the local network, all computers except one Ubuntu machine can access 1. adobe.com 2. icann.org 3. apache.org 4. example.com. The Ubuntu machine returns (in Firefox): "Though the site seems valid, the browser was unable to establish a connection." Furthermore, when I traceroute those websites from the Ubuntu machine, they all return ubuntu.local and it ends there:

      traceroute to icann.org (192.0.32.7), 30 hops max, 40 byte packets
       1  ubuntu.local (192.168.1.105)  3000.791 ms !H  3000.808 ms !H  3000.814 ms !H

    I've checked the hosts file and there isn't anything in there, and I have an Apache server on the machine, so if it were redirected to localhost I'd probably see the localhost webroot page. Thanks in advance!

      user@ubuntu:~$ netstat -nr
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
      169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth1
      192.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 eth1
      0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth1

    The Ubuntu machine is one of six on the network. I'm using OpenDNS for DNS, so I don't think that should be the problem.
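
    For reference, the routing table above looks like the culprit: the 192.0.0.0/255.0.0.0 entry sends everything in 192.0.0.0/8 straight out eth1 as if it were on-link instead of via the gateway, so any site resolving into that range (icann.org at 192.0.32.7, per the traceroute) gets the !H host-unreachable result. A hedged sketch of removing it and retesting; interface and targets are the ones from the question, and if the route reappears after a reboot it is worth checking /etc/network/interfaces or whatever added it:

      # drop the bogus class-A route so those destinations use the default gateway again
      sudo route del -net 192.0.0.0 netmask 255.0.0.0 dev eth1
      # equivalent with iproute2
      sudo ip route del 192.0.0.0/8 dev eth1

      # verify and retest
      netstat -nr
      traceroute -n icann.org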

    Read the article

  • Is there a way to keep the Wacom Control Panel from failing to show in certain cases?

    - by S.gfx
    This is the problem: suddenly, you double-click the Wacom tablet settings icon on the desktop and it won't show the dialog. It appears to be loaded, as it's shown down in the Windows taskbar. I suspect it is caused by a change of resolution or some setting; maybe it suddenly sets the origin of the panel dialog some 3000 pixels to the right or something. I have dug through the wacom_tablet.dat file to see if I could fix it by changing some value there, but it looks like a log, a history, more than an ini for settings... and anyway that does not solve it. My solution is to always keep a very complete settings file backed up to restore (with the Wacom utility for this, which did not exist in previous driver versions) when this happens, but that is counterproductive: you change the settings per project (and per software). I have seen it happen with a Cintiq 12", Intuos4 A6, Graphires, Intuos 1. Is it just me, doing something weird every time? I doubt it, it's normal use; I am amazed that no one else seems to have had this problem (or nobody asked), since it happens often with typical use. Maybe it's because I'm setting a shortcut on the desktop? Weird, as it works perfectly until some random moment... (I have dug through Wacom's forums and FAQs, then here, but found nothing related to it... neither in "related questions".) It happens in Windows XP, 7, etc. It has done so for years in my experience, and several times at work, which is worse.

    Read the article

  • Can't login to phpMyAdmin on a WAMP server running Windows 2008

    - by Richard West
    I am setting up a new server. I have installed Apache 2.2.17, PHP 5.3.3, MySQL 5.1.53 and phpMyAdmin 3.3.8 running on Windows 2008 (32-bit). I have configured Apache and PHP, and they appear to be working fine. I created the standard test PHP page with the following code and everything appears to be working:

      <?php
      //index.php
      phpinfo();
      ?>

    I also see the mysql and mysqli sections in the above web page, so it appears I have the proper extensions loaded for MySQL access. The problem I am having centers around phpMyAdmin. I have it installed and I can get to the login screen at http://localhost/pma I log in using "root" and the password I have set up for root. After a delay of 30 seconds or so, the web page goes blank, and the URL is now http://localhost/pma/index.php?token= No error is ever displayed - but nothing usable is either. I have confirmed that MySQL is running by going to the command line and logging into MySQL from there. I have double-checked my configuration, but I am not having any luck getting this to work. I have also disabled the Windows firewall, but that did not change anything. I installed MySQL using the standard port 3306. Any advice would be greatly appreciated.

    Read the article

  • CUPS causes printer to click and doesn't print

    - by Pez Cuckow
    I'm having a strange problem with my Canon iP4850 when trying to use CUPS on a Raspberry Pi (this is not RPi-specific, please do not vote to move it). When I plug the printer into my laptop (OS X) or my desktop (Windows 7) it identifies as an iP4800 and prints perfectly. So I plug it into the Pi (running Debian), set it up in CUPS, enable sharing, and can now see the iP4800 series shared on the network. However, if I print to it (using AirPrint etc...), the file gets to CUPS safely (it shows in the queue), but when it tries to print, the printer clicks (a loud thunk) 3-4 times and then gives up, with a double amber flashing light. In CUPS it shows as job completed. Do you know why using the Pi and CUPS would cause what appears to be a hardware fault, and what I can do to fix the problem or to provide further debug info? Thanks for your time!

      Description: Canon iP4800 series
      Location: Lounge
      Driver: Canon PIXMA iP4800 - CUPS+Gutenprint v5.2.9 (color, 2-sided printing)
      Connection: usb://Canon/iP4800%20series?serial=2239B2

    Note: I've tried deleting and re-adding the printer to the laptop, desktop and Pi, and the results are always the same. Log from plugging in the printer and (attempting to) print something until the printer turned off again:

      pi@pezpi /var/log $ dmesg
      [ 7284.176336] usb 1-1.2: new high speed USB device number 8 using dwc_otg
      [ 7284.279703] usb 1-1.2: New USB device found, idVendor=04a9, idProduct=10d5
      [ 7284.279750] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
      [ 7284.279771] usb 1-1.2: Product: iP4800 series
      [ 7284.279786] usb 1-1.2: Manufacturer: Canon
      [ 7284.279800] usb 1-1.2: SerialNumber: 2239B2

    Setting CUPS to verbose (change LogLevel in cupsd.conf to debug or debug2):

      pi@pezpi /var/log $ sudo vim /etc/cups/cupsd.conf
      pi@pezpi /var/log $ sudo /etc/init.d/cups restart
      [ ok ] Restarting Common Unix Printing System: cupsd.
      pi@pezpi /var/log $

    The log from /var/log/cups/error_log is at http://pastebin.com/7VZMRMrG (too large to post here). The log contains, in order (I deleted the log and then did the following): restarting the cups server, attempting to print a test page x2, printing from 192.168.1.90 via AirPrint, printing from 192.168.1.90 via Network Print, turning the printer off and on again.

    Read the article

  • Reducing video mode switching during Linux boot

    - by Zack
    When I boot up my desktop computer, which only has Linux on it, the video mode and/or console font gets switched four times: (1) when GRUB starts, it switches from 80x25 text to a graphical mode so it can draw a pretty background behind its menu; (2) GRUB then goes back to 80x25 text after I pick something from the menu; (3) when the KMS driver for my video card loads, it switches to a much higher-resolution text mode (I don't know if this is a hardware text mode or not); (4) finally X starts and it goes graphical and stays that way. I think this last switch does not change the resolution of the video mode, only the graphicalness. I'd like to get rid of as many of these mode switches as possible. Ideally, when GRUB takes over from the BIOS it would go directly to the same high-resolution text mode that the KMS driver selects, and the display would stay in that mode till X starts and brings up graphics. I am under the impression that this is possible by mucking with the kernel command line and/or the GRUB console module load parameters, but I don't know the details. GRUB 1.98+20100706, kernel 2.6.32.15 using Nouveau video drivers. Distro is Debian unstable. Please no answers that involve recompiling anything or cobbling together bleeding-edge kernel/driver combinations; I don't care enough about this to go to that much trouble.

    EDIT: Tobu suggests setting GRUB_GFXMODE to the full pixel resolution of the monitor, and GRUB_GFXPAYLOAD_LINUX=keep to avoid the mode switch after the menu goes away. This does part of what I want, but winds up being worse overall. There's no mode switch after the menu, but there's still a painfully slow screen repaint (I should probably just give up on GRUB's gfxmode, it's waaaay too slow at 1920x1200). More seriously, there's now a double mode switch when nouveaufb loads, along with fun-looking error messages in dmesg:

      [ 5.923798] [drm] nouveau 0000:02:00.0: allocated 1920x1200 fb: 0x40250000, bo ffff8801ba5f4600
      [ 5.923802] fb: conflicting fb hw usage nouveaufb vs EFI VGA - removing generic driver
      [ 5.923821] [drm] nouveau 0000:02:00.0: PFIFO_INTR 0x00000010 - Ch 1
      ("PFIFO_INTR" message repeats 400+ times)
      [ 5.925609] Console: switching to colour dummy device 80x25
      [ 5.925802] Console: switching to colour frame buffer device 240x75
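
    For reference, a minimal sketch of where the knobs discussed in the EDIT live on Debian; the resolution is the monitor's native mode from the question, and GRUB_TERMINAL=console is the opposite trade-off (skip GRUB's graphical menu entirely and stay in plain text until KMS kicks in):

      # /etc/default/grub
      #   GRUB_GFXMODE=1920x1200x32
      #   GRUB_GFXPAYLOAD_LINUX=keep
      #
      # or, to avoid GRUB's slow graphical menu altogether:
      #   GRUB_TERMINAL=console

      # regenerate grub.cfg after editing
      sudo update-grub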

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method / formula, etc. that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, this isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic.

    Assumptions: I have 'some' initial number of files. This number would be arbitrary but large - say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor.

    Put another way: as time progresses this store will grow. I want the best balance of current performance and performance as my needs increase - say I double or triple my storage. I need to be able to address both current needs and projected future growth. I need to both plan ahead and not sacrifice too much current performance.

    What I've come up with: I'm already thinking about using a hash split every so many characters to spread things out across multiple directories while keeping the trees even, very similar to what's outlined in the comments on the question above. It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would differ based on what I've outlined, depending on the initial scale. As far as I can figure, there isn't a one-size-fits-all solution here; it would be horrendously time-intensive to work something out experimentally.
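
    As a concrete illustration of the hash-split idea, here is a hedged sketch that derives a two-level directory layout from the first characters of a content hash, which keeps the tree balanced and makes duplicates collapse onto the same path. The depth and width (two levels of two hex characters, i.e. up to 256 directories per level) are arbitrary choices for the sketch, not a sizing recommendation, and the shell is only used for illustration - the same layout applies whatever language actually writes the files:

      #!/bin/bash
      # store_file.sh <source-file> <store-root>
      # e.g. ./store_file.sh invoice.pdf /data/store  ->  /data/store/ab/cd/abcdef...0123.pdf
      set -eu
      src=$1; root=$2
      hash=$(sha1sum "$src" | cut -c1-40)      # content hash: identical files map to one path
      dir="$root/${hash:0:2}/${hash:2:2}"      # two 2-character levels spread the load evenly
      ext="${src##*.}"                         # naive extension grab, good enough for a sketch
      mkdir -p "$dir"
      cp -n "$src" "$dir/$hash.$ext"           # -n: never overwrite an existing (duplicate) file
      echo "$dir/$hash.$ext"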

    Read the article

  • Search for partial IP address using Windows Search?

    - by Dr. Dre
    I have a folder, c:\projects\, added to the Windows index. I know the indexing is working because I search for stuff in this folder all the time and the results come up very fast, and I've never noticed any accuracy problem until now. (I have had to tweak the indexing options to expand which file types have their contents indexed rather than just the file name, etc., but after that Search has worked pretty well for me.) I've encountered a problem while trying to search for references to a particular IP address subnet. I'm trying to find all references to IPs matching the pattern "192.168.220.xxx" (i.e. the 192.168.220.0/24, a.k.a. 192.168.220.0/255.255.255.0 IP/netmask).

    Within Windows Explorer: c:\projects\*.* is indexed; c:\projects\work\project1\network_list.txt contains several "192.168.220.xxx" IPs; indexing status says all items are indexed (193,000 items). When I try to search for a partial IP match, there are no search results. Tried searching for: 192.168.220, 192.168, 192.168.220., 192.168.220., 192.168.220.?, 192.168.220.??, 192.168.220.???, 192.168., 192.168.. Also tried variants of all the above surrounded with double quotes. All the searches returned 0 results.

    Within MS Outlook 2007: my mailbox and all my offline .pst files are indexed. I search in Outlook pretty frequently, so I'm pretty sure indexed searches work across the inbox and all .pst files. Indexing status in Outlook says all items are indexed. I also have references to these IPs in email, and I'd like to find all of them. Basically the same deal as above: I can't search for "192.168.220.xxx" IPs. Any way to fix this?
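
    Until the indexed search cooperates (partial-octet queries tend to fail, likely because of how the indexer tokenizes the text), a plain content scan of the files finds the references reliably; a hedged sketch using the path from the question:

      rem literal substring search; /s recurses, /m prints only the names of matching files
      findstr /s /m /c:"192.168.220." c:\projects\*.*

      rem the same idea from PowerShell, with a stricter pattern:
      rem   Get-ChildItem c:\projects -Recurse | Select-String -Pattern "192\.168\.220\.\d{1,3}" -List

    This does not cover the Outlook side, which would need its own search inside the .pst files.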

    Read the article

  • Error while installing boost_1_54

    - by Farhat
    On trying to install boost I get this error during the configuration checks. Googling did not give any pointers.

      [root@heracles boost_1_54_0]# ./b2 install
      Performing configuration checks
      - 32-bit                   : no  (cached)
      - 64-bit                   : yes (cached)
      - arm                      : no  (cached)
      - mips1                    : no  (cached)
      - power                    : no  (cached)
      - sparc                    : no  (cached)
      - x86                      : yes (cached)
      error: No best alternative for libs/coroutine/build/allocator_sources
          next alternative: required properties: <link>static <target-os>windows <threading>multi
              not matched
          next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
              not matched
          next alternative: required properties: <link>static <threading>multi
              not matched
      - has_icu builds           : no  (cached)
      warning: Graph library does not contain MPI-based parallel components.
      note: to enable them, add "using mpi ;" to your user-config.jam
      - zlib                     : yes (cached)
      - iconv (libc)             : yes (cached)
      - icu                      : no  (cached)
      - icu (lib64)              : no  (cached)
      - compiler-supports-ssse3  : yes (cached)
      - compiler-supports-avx2   : no  (cached)
      - gcc visibility           : yes (cached)
      - long double support      : yes (cached)
      warning: skipping optional Message Passing Interface (MPI) library.
      note: to enable MPI support, add "using mpi ;" to user-config.jam.
      note: to suppress this message, pass "--without-mpi" to bjam.
      note: otherwise, you can safely ignore this message.
      error: No best alternative for libs/coroutine/build/allocator_sources
          next alternative: required properties: <link>static <target-os>windows <threading>multi
              not matched
          next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
              not matched
          next alternative: required properties: <link>static <threading>multi
              not matched
      - zlib                     : yes (cached)

    How can the alternative for allocator sources be located?
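
    For reference, the message comes from Boost.Coroutine's build file in 1.54, whose allocator_sources target is only defined for Windows, segmented-stacks, or static builds, none of which match a default shared-library build. Two commonly suggested workarounds are sketched below; they are assumptions to try rather than an official fix (the build script is reportedly sorted out in later Boost releases):

      # skip the library whose build file trips up (and Context, which it depends on)
      ./b2 --without-coroutine --without-context install

      # or build static multithreaded libraries, which matches one of the listed alternatives
      ./b2 link=static threading=multi install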

    Read the article
