Search Results

Search found 13262 results on 531 pages for 'complete validation'.


  • Host system resets (crashes) when using VMware or VirtualBox and 64-bit guest systems

    - by sinni800
    I have been trying to install virtual systems in VMware for a while now and have encountered strange behaviour from my PC:

      - In "automatic" virtualization mode it either outputs a cryptic error message right on startup, before even the BIOS (I can maybe reproduce it and post it later), or it resets the complete HOST system (black screen, BIOS...).
      - If I install Windows XP, it works well in "binary translation" mode.
      - If I try installing Linux in "binary translation" mode, it crashes 1 or 2 seconds after I hit Enter on the GRUB selection screen (after the first page of kernel messages has rolled in).
      - Using VirtualBox, it crashes right in the BIOS. It did give me a bluescreen, though: 0x00000101 CLOCK_WATCHDOG_TIMEOUT, "a clock interrupt was not received on a secondary processor within the allocated time interval".

    News: I tried VirtualBox again and this time it did not completely crash the computer. It gave me a critical error and a log file: http://pastebin.com/yKZSDs91

    In conclusion, it crashes instantly if VT-x is activated. If VT-x is off, it seemingly only crashes if I try to install something 64-bit. Another update: yes, it ONLY crashes when the guest is 64-bit!

    What I tried: reinstalling Windows (my Windows installation was quite broken, so it seemed natural; it didn't work though) and a new BIOS. What I am certain of: virtualization extensions are activated in the BIOS.

    My computer specs: ASUS P8P67 LE mainboard (newest BIOS/EFI firmware), Intel Core i5 2500K, ATI Radeon HD 5770, 16 GB Corsair 1333 MHz DDR3 RAM (4 x 4 GB).

    Read the article

  • How do I restore tab-completion on shell variables on the bash command-line?

    - by Eric
    I've long set my most-recently visited directories to shell variables d1, d2, etc. On an ancient Fedora machine I could type a command like

        $ cp $d1/<TAB>

    and the shell would replace $d1 with text like /home/acctname/projects/blog/ and would then show me the contents of .../blog, like any tab-completion. Now, both Ubuntu (wheezy/sid) and Fedora 16 just backslash-escape the '$', and naturally there are no completions to show. You can see this behavior in action in an OS X Terminal window: on 10.8, do something like ls $HOME/<TAB> to see what I mean. Is there a bash shell variable or option that can restore the old behavior? man bash suggests this is a bug:

        complete (TAB)
            Attempt to perform completion on the text before point. Bash attempts
            completion treating the text as a variable (if the text begins with $),
            username (if the text begins with ~), hostname (if the text begins
            with @), or command (including aliases and functions) in turn. If none
            of these produces a match, filename completion is attempted.

    I get the above-described completion when a token starts with '~' or a letter. It's just '$'-completion that's broken.
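
    A likely fix, as a hedged sketch: newer bash versions (the default behavior changed in 4.2) ship a direxpand shell option that makes TAB expand the variable again instead of backslash-escaping the '$'. Assuming your bash is new enough to have it:

        # Restore variable expansion on TAB completion
        shopt -s direxpand
        # Make it stick across sessions
        echo 'shopt -s direxpand' >> ~/.bashrc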

    Read the article

  • Can't remove index.php from URL in CodeIgniter

    - by Ashiq
    I am new to the CodeIgniter framework. I want to remove index.php from the URL and have tried many times, but it's not working. Here is my .htaccess file:

        RewriteEngine on
        RewriteBase /test/
        RewriteCond $1 !^(index\.php|resources|robots\.txt)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ test/index.php/$1 [L,QSA]

    I also changed $config['index_page'] = ''; but when running this I got an error message:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable
        to complete your request. Please contact the server administrator at
        [email protected] to inform them of the time this error occurred, and the
        actions you performed just before this error. More information about this
        error may be available in the server error log.

    Here is my Apache error log:

        [Sat Jan 05 16:59:53.265625 2013] [core:error] [pid 3976:tid 1152] [client ]
        Request exceeded the limit of 10 internal redirects due to probable
        configuration error. Use 'LimitInternalRecursion' to increase the limit if
        necessary. Use 'LogLevel debug' to get a backtrace.

    Please help me solve this. Thanks.
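
    The error log points at a rewrite loop, and the likely culprit is the rewrite target: with RewriteBase /test/ in effect, the target test/index.php/$1 sends Apache back into a path that matches the rule again. A sketch of a fix (untested, assuming the app really lives under /test/):

        RewriteEngine on
        RewriteBase /test/
        # Never rewrite real files or directories
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Relative target: resolved against RewriteBase, so it becomes /test/index.php/...
        RewriteRule ^(.*)$ index.php/$1 [L,QSA]

    Combined with $config['index_page'] = '' (already done above), this is the usual CodeIgniter arrangement.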

    Read the article

  • Very Slow DSL (ethernet) speed [New Interesting Update]

    - by Abhijit
    Very IMPORTANT and INTERESTING UPDATE: for some reason I decided to do a completely new setup, and this time to again have openSUSE plus Ubuntu. So I first reinstalled Lubuntu and then installed openSUSE 12.2 (64-bit). Now my DSL speed is perfectly normal on openSUSE. This is very scary. Is it possible for an operating system to manipulate my NIC so that it works fine only on that operating system and not on another? Putting positive thinking over paranoia: what is it that makes ONLY SUSE get my NIC to work at normal speed, while Ubuntu cannot? Nor Fedora? Nor Linux Mint? What are all these OSes lacking that lets SUSE work great?

    == Original question ==

    I was on openSUSE 12.2 when my DSL speed was normal. Yesterday I switched from openSUSE to Ubuntu 12.04 and the speed decreased, fluctuating in the 7-10-13-20-25 kbps range. Then I switched to Linux Mint, and then to Fedora. Still slow. When I was on Ubuntu I disabled IPv6, but still no luck. Now I am on Fedora, but this time with a DIFFERENT ISP, and I am still getting very slow speeds. So my guess is this has nothing to do with the OS. What can be wrong? Is this a problem with the NIC? Does NIC speed decrease over time? Does a NIC's life end over time, as with a keyboard or mouse? Help please.

    All the OSes I used are 64-bit, and my laptop is a Compaq Presario A965Tu (Intel Centrino Dual Core). An interesting thing to notice: I get normal speed while downloading torrents inside torrent clients. The slow-speed issue applies to downloads from any web browser and to installing software using the terminal.
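
    Before blaming the OS, one low-level thing worth ruling out is the link speed/duplex the NIC negotiated, since a 10 Mb/s half-duplex link produces exactly this kind of crawl. A diagnostic sketch, assuming the interface is eth0 (adjust to whatever ip link reports):

        # Show negotiated speed, duplex and auto-negotiation state
        sudo ethtool eth0
        # If it negotiated badly, try forcing sane values as a test
        sudo ethtool -s eth0 speed 100 duplex full autoneg off
        # Compare driver offload settings between the distros
        sudo ethtool -k eth0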

    Read the article

  • Unable to Align Layers in Photoshop Properly with CS2

    - by Jonathan Sampson
    Cannot align semi-transparent items? Windows Vista, Photoshop CS2. Steps to repeat:

      1. Create a new document.
      2. Fill a circle on a new layer.
      3. Drop the opacity of the filled circle to 10%.
      4. Create a new empty layer below the circle layer.
      5. Merge the empty layer with the filled circle layer.
      6. Select the entire canvas.
      7. Attempt to align layers to the selection (Layer > Align Layers To Selection > Vertical Centers).

    I get the following error: "Could not complete the Vertical Centers command because there are no layers to be moved." Clearly this is not true, as I'm selecting the layer with the semi-translucent ball on it. Now, if you had tried this same command prior to step 5 (when the layer was at 10% opacity), it would have worked. Is there some way around this problem? I need to move layers around that begin as transparent items, with the layer opacity at 100%, where the objects on the layer are themselves not very opaque. I've confirmed on another machine that this problem doesn't exist in CS3. It may exist in earlier copies of Photoshop, but I only have access to CS2 (has the problem) and CS3 (does not have the problem).

    Read the article

  • How to set up Joomla CMS as a backend for an iPhone app

    - by srik
    I would like my iPhone app to get dynamic content off the net. This content should be managed using a CMS. I have gone ahead and installed Joomla on my server and will be using the Joomla web interface to create and manage content. I would now like the iPhone app to log in to my server and fetch the content. I do not want complete web pages for my iPhone app; instead, I want the content in XML, JSON or some other serialized format, so that I can use the data in a custom layout native to the app. So I am looking for two things in particular:

      1. How to set up HTTP-based authentication for my iPhone app to access data from my server.
      2. How to access the content in a serialized format (XML, JSON, etc.).

    Are there plugins/extensions/components I can use to achieve this? Any advice on how this can be achieved would be helpful. I am completely new to setting up/using a CMS.
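
    On the client side, HTTP Basic auth plus a JSON endpoint is enough to prototype against, whatever Joomla extension ends up serving the data. A sketch of the request the app would make; note the endpoint URL here is hypothetical and depends entirely on the REST/API extension you install:

        # Hypothetical endpoint: the real URL depends on the chosen Joomla extension
        curl -u appuser:secret \
          "http://example.com/index.php?option=com_content&view=article&id=42&format=json"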

    Read the article

  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system that cannot subsequently be overwritten, appended to, updated in any way, or deleted. Not by a sudoer, root, or anybody. I am attempting to meet the requirements of the financial services regulations for recordkeeping, FINRA 17a-4, which basically requires that electronic documents are written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices.

    Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.?

    I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again.

    I'm looking for any smart ideas, and in the worst-case scenario I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that is a good starting point, and to put in place a carefully configured Linux server acting as this type of network storage device, doing nothing else. After all of that, encryption on the files would be useful too!
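
    For reference, this is exactly the hole with the immutable attribute; a quick sketch showing why it fails the "not even root" requirement:

        touch record.dat
        sudo chattr +i record.dat    # set the immutable attribute
        rm -f record.dat             # fails: Operation not permitted
        sudo chattr -i record.dat    # ...but root can simply clear the flag
        rm -f record.dat             # and now the delete succeeds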

    Read the article

  • How do I cancel a Windows Server 2003 repair install?

    - by Kilgore2k
    System: Windows Server 2003 Enterprise. Scenario: the NTDS database is corrupt, and all attempts to fix it with esentutl fail. I ran chkdsk, which seemed to repair disk errors and give access to the ntds.dit file, but esentutl still fails (I attached the drive to a different server to run esentutl). The error:

        Access to source database '[path to copy of]/ntds.dit' failed with Jet error -1022.
        Operation terminated with error -1022 (JET_errDiskIO, Disk IO error) after 0.170 seconds.

    This error occurs on any disk I copy the files to, including the original location in C:\WINDOWS\NTDS\.

    Now enter the "stupid!" and "what was I thinking!?" part (must be the late hour...). Stupid: no up-to-date backup; after restoring a backup I get a network password error from lsass. What was I thinking!?: I started the repair install from the original CD, but the install fails since AD fails to start. Now I can't boot into any mode (safe mode, AD restore, etc.) nor complete the repair install. I would really like to avoid a fresh install, since I have the Exchange server on this DC and would rather migrate to a new server than start from scratch. Thanks!

    Read the article

  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted on shared hosting (DreamHost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors with 70-80k page views a day.

    This will be my first deployment of a Django application, and I'm not comfortable with managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with WebFaction; however, security and performance aren't an issue there, as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at this moment.

    I would like to know if I can host a site with this much traffic in WebFaction's shared environment. How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment, as I'll be using my current PostgreSQL database.

    Read the article

  • Why won't PHP 5.3.3 compile libphp5.so on Red Hat Enterprise?

    - by spatel
    I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13; however, the Apache module, libphp5.so, will not be compiled. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

        './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...

        ** ** ** Warning: inter-library dependencies are not known to be supported.
        ** ** ** All declared inter-library dependencies are being dropped.
        ** ** ** Warning: libtool could not satisfy all declared inter-library
        ** ** ** dependencies of module libphp5. Therefore, libtool will create
        ** ** ** a static module, that should work as long as the dlopening
        ** ** ** application is linked with the -dlopen flag.

        copying selected object files to avoid basename conflicts...
        Generating phar.php
        Generating phar.phar
        PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
        clicommand.inc
        pharcommand.inc
        directorytreeiterator.inc
        directorygraphiterator.inc
        invertedregexiterator.inc
        phar.inc
        Build complete.
        Don't forget to run 'make test'.

    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!

    Read the article

  • Redirection of outbound UDP port.

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: only unprivileged ports are getting out. When I run ntpdate, for example, I go out on a high, unprivileged port and get a response on UDP 123. That's fine. The ntpd daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide.

    Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN.

    So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. I tried, but can't seem to get iptables to do what I want. I'm not sure if it's my lack of skill, or if I'm trying the wrong solution. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
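
    This is source NAT on the port only, and connection tracking un-translates the replies automatically, so a single rule should be enough; no explicit 2123-to-123 return rule is needed. A sketch, assuming the WAN interface is eth0:

        # Rewrite the source port of outbound NTP packets from 123 to 2123.
        # Conntrack maps replies arriving for port 2123 back to ntpd on 123.
        iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 \
                 -j MASQUERADE --to-ports 2123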

    Read the article

  • Apache and MySQL not working well after extending filesystem

    - by xtrimsky
    I had 4 GB on my /var filesystem (/dev/mapper/vg00-var) and wanted to extend it to 160 GB. I did it following this tutorial: http://faq.1and1.com/dedicated_servers/root_server/linux_admin_help/7.html

    Now I have the space:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md1              4.0G  424M  3.6G  11% /
        /dev/mapper/vg00-usr  4.3G  1.4G  3.0G  32% /usr
        /dev/mapper/vg00-var  198G  6.5G  192G   4% /var
        /dev/mapper/vg00-home 4.3G  4.4M  4.3G   1% /home
        none                  1.1G     0  1.1G   0% /tmp

    But now I have a problem: for Apache to work, each time I reboot I also need to restart it with "apachectl -k restart", which is already terrible. I think this is because /var contains the htdocs. The worst part is that MySQL is not starting at all; MySQL also has some files in /var. What have I done wrong? :( Thank you.

    EDIT: Attaching /var/log/mysqld.log:

        120602 11:17:44 InnoDB: Waiting for the background threads to start
        120602 11:17:45 InnoDB: 1.1.8 started; log sequence number 8354009
        120602 11:17:45 [ERROR] /usr/libexec/mysqld: unknown variable 'set-variable=local-infile=0'
        120602 11:17:45 [ERROR] Aborting
        120602 11:17:45 InnoDB: Starting shutdown...
        120602 11:17:46 InnoDB: Shutdown completed; log sequence number 8354009
        120602 11:17:46 [Note] /usr/libexec/mysqld: Shutdown complete
        120602 11:17:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
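
    The MySQL failure looks unrelated to the resize itself: the log aborts on an unknown variable, and the legacy set-variable= prefix is no longer accepted by newer MySQL versions. A sketch of the my.cnf fix, assuming the offending line lives in /etc/my.cnf:

        [mysqld]
        # Before (legacy syntax, now fatal):  set-variable=local-infile=0
        # After:
        local-infile=0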

    Read the article

  • Exchange emails not being delivered for one user

    - by Cylindric
    We have an Exchange infrastructure going through a migration from 2003 SP2 (call it ExOld) to 2010 (ExNew). All users are now on the new server, but mail is still being directed to ExOld until testing is complete. ExNew sends emails directly to the internet.

    For one particular user, emails don't seem to be reliably delivered, and the odd thing is that it's not all emails. I can see external emails in his inbox. If I send an internal email, it works fine. If I send an email from Gmail to him, it doesn't get through. If I telnet from outside to ExOld, I can send an email to him. If I telnet from outside to ExNew, I can send an email to him. This is a transcript that results in a successful send:

        220 ExOldName Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at Mon, 22 Oct 2012 10:55:26 +0100
        EHLO test.com
        500 5.3.3 Unrecognized command
        EHLO test.com
        250-ExOldFQDN Hello [MyTestExternalIp]
        250-TURN
        250-SIZE
        250-ETRN
        250-PIPELINING
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-CHUNKING
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XEXCH50
        250 OK
        MAIL FROM:[email protected]
        250 2.1.0 [email protected] OK
        RCPT TO:[email protected] notify=success,failure
        250 2.1.5 [email protected]
        DATA
        354 Start mail input; end with <CRLF>.<CRLF>
        Subject:Test 1056

        Test 10:56
        .
        250 2.6.0 Queued mail for delivery
        quit
        221 2.0.0 ExOldFQDN Service closing transmission channel

    Emails go through Symantec Cloud, and their "Track and Trace" shows the messages going through, with a "delivered ok" log entry:

        2012-10-22 09:19:56 Connection from: 209.85.212.171 (mail-wi0-f171.google.com)
        2012-10-22 09:19:56 Sending server HELO string: mail-wi0-f171.google.com
        2012-10-22 09:19:56 Message id: CAE5-_4hzGpY2kXFbzxu7gzEUSj5BAvi+BB5q1Gjb6UUOXOWT3g@mail.gmail.com
        2012-10-22 09:19:56 Message reference: 135089759500000177171130001194006
        2012-10-22 09:19:56 Sender: [email protected]
        2012-10-22 09:19:56 Recipient: [email protected]
        2012-10-22 09:20:26 SMTP Status: OK
        2012-10-22 09:19:56 Delivery attempt #1 (final)
        2012-10-22 09:19:56 Recipient server: ExOldIP (ExOldIP)
        2012-10-22 09:19:56 Response: 250 2.6.0 Queued mail for delivery

    I'm not sure where to look on the old (or new) server for information as to where the mails are ending up.

    Read the article

  • Assigning cores to VM in vSphere

    - by user114933
    Complete vSphere newbie here...

    Background: I have a 12-core machine with 24 VMs on it. Currently, all the processing power is shared equally between these VMs.

    The question: can I configure one VM to be given two cores' worth of processing, no matter what's happening on the other machines?

    My research: I tried two things in vSphere.

      1. I set the reservation and limit on one VM to the equivalent of two cores. To test whether my objective was being reached, I measured the time it took to gzip a file when the other VMs were running nothing and when they were running CPU-intensive operations. I expected the gzip time to be the same, because this VM gets priority for some processing. Unfortunately, the time taken to gzip the file when the other VMs were running something was significantly more than when they were not running anything.
      2. I tried setting the Hyperthreaded Core Sharing mode to Internal, hoping that this would mean my VM would get at least an entire core to itself. This did not work either.

    Thanks in advance!

    Read the article

  • Formatting pwd/ls for use with scp

    - by eumiro
    I have two terminal windows with bash: one is a local shell on the client computer, the other is an SSH session on the server. On the server, I am in a directory, looking at a file I would like to copy to my client using scp from the client. On the server I see:

        user@server:/path$ ls filename
        filename

    I can now type scp in the client shell, select and copy the user@server:/path from the server shell's prompt and paste it into the client shell, then type a slash, copy and paste the filename, and append a dot to get:

        user@client:~$ scp user@server:/path/filename .

    which copies the file from the server to the client. Now I am searching for a command on the server that would work like this:

        user@server:/path$ special_ls filename
        user@server:/path/filename

    That is, it would give me the complete scp-ready string to copy and paste into the client shell. Something in the form of echo $USER@$HOSTNAME:$(pwd)/$filename, working with relative and absolute paths. Is there any such command/switch combination, or do I have to hack it myself? Thank you very much.
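
    No standard utility does this, but a small bash function in the server's ~/.bashrc gets it done. A sketch (special_ls is just the placeholder name from the question):

        # Print an scp-ready string for each argument, resolving
        # relative names against the current directory.
        special_ls () {
            local f
            for f in "$@"; do
                case $f in
                    /*) printf '%s@%s:%s\n' "$USER" "$HOSTNAME" "$f" ;;
                    *)  printf '%s@%s:%s/%s\n' "$USER" "$HOSTNAME" "$PWD" "$f" ;;
                esac
            done
        }

    Then special_ls filename prints user@server:/path/filename, ready to paste into the client shell (assuming $HOSTNAME resolves from the client; otherwise substitute the FQDN).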

    Read the article

  • Load balancing with nginx and Tomcat

    - by London
    Hello. This should be fairly easy to answer for any system admin; the problem is that I'm not a server admin, but I have to complete this task, and I'm very close but still not managing it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access them by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name (i.e. domain.com), nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be noted):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, they are transferred to either machine1:8080/appName or machine2:9090/appName? Thank you.
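
    Giving proxy_pass a URI part should do it: when proxy_pass carries a URI, nginx replaces the portion of the request URI matched by the location with that URI. A sketch of the changed location block (untested, assuming the context path really is /appName on both instances):

        location / {
            # "/" (the matched prefix) is replaced by "/appName/",
            # so GET / becomes GET /appName/ on the selected backend.
            proxy_pass http://backend/appName/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
        }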

    Read the article

  • VPS Memory Exhausted Even With Light Settings

    - by user101570
    Linux noob here. I have a 256 MB VPS running Ubuntu 11.04 server, and when I run "free -m" the result shows all memory being used (including the second line re: buffers/cache). I found this very strange, considering I only have 5 Apache processes running, each chewing up about 20 MB, and MySQL taking up 30 MB. To my knowledge, and according to "top", I have no other memory hogs operating. Settings that may be relevant:

        PHP memory_limit = 32M
        MySQL key_buffer = 16M
        Prefork MPM MaxClients = 10

    When I reviewed these settings, I naturally thought MaxClients was too high, so I tried switching it to 5. Now not only does my memory still show as 100% used, my website loads much, much slower, despite not getting any traffic aside from mine at the moment. I don't understand this. I thought a single Apache process handles all requests from a client received within the "KeepAliveTimeout" window, which I've set to 2 seconds. With my initial configuration of 10 MaxClients, my page load times are around .3ms, so a single process should handle that no problem, correct? So next I went to the extreme of 1 for MaxClients. My memory is still at 100% usage and my site loads painfully slowly. I'm a noob at a complete loss here. According to the many tutorials I've read on basic server setup, I should be good to go. Help! Please!

    Edit:

                     total       used       free     shared    buffers     cached
        Mem:           256        256          0          0          0          0
        -/+ buffers/cache:        256          0
        Swap:            0          0          0
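
    A pattern like this on a 256 MB VPS (used pegged at the ceiling, zero buffers/cache, zero swap) often means an OpenVZ/Virtuozzo container bumping into its bean-counter limits rather than a real leak in Apache or MySQL. A quick check, assuming it is an OpenVZ guest:

        # Only present inside OpenVZ/Virtuozzo containers; a non-zero
        # failcnt column (e.g. on privvmpages) means a limit was hit.
        cat /proc/user_beancounters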

    Read the article

  • IBM WebSphere Host On-Demand (HoD): Cannot run program with "runprogram" command. What can I do?

    - by kokbira
    I access a system that uses an IBM Host On-Demand client. I am trying to create a macro to do a hard task (more than 90,000 keys must be pressed to complete it), and to make it easier I need to call some "external" applications using the "runprogram" tag. But I don't know why it does not work (following the IBM help - http://publib.boulder.ibm.com/infocenter/hodhelp/v11r0/index.jsp?topic=/com.ibm.hod.doc/doc/macro/macro.html - did not help...). I am running it in Firefox 3.6 and my Java version is JDK 1.6.0_20. Below is an example of a macro that should work, but doesn't:

        <HAScript name="TEST4" description="" timeout="60000" pausetime="300"
                  promptall="true" blockinput="false" author="wingman"
                  creationdate="05/05/2011 16:14:31" supressclearevents="false"
                  usevars="false" ignorepauseforenhancedtn="true"
                  delayifnotenhancedtn="0" ignorepausetimeforenhancedtn="true">
          <vars>
            <create name="$intReturn$" type="integer" value="0" />
          </vars>
          <screen name="Tela1" entryscreen="true" exitscreen="false" transient="false">
            <description>
              <oia status="NOTINHIBITED" optional="false" invertmatch="false" />
            </description>
            <actions>
              <runprogram exe="'c:\\Program Files\\Windows NT\\Accessories\\Wordpad.exe'"
                          param="'c:\\a.txt'" wait="true" assignexitvalue="$intReturn$" />
              <message title="" value="'Return value is '+$intReturn$" />
            </actions>
            <nextscreens timeout="0">
            </nextscreens>
          </screen>
        </HAScript>

    Read the article

  • Windows XP cannot read DVD burned on Ubuntu

    - by webcrawl
    The story: I burned a DVD on Ubuntu using Brasero. When file burning was complete, I cancelled the burning of the checksum to the DVD. Then I put the DVD into a Windows XP (SP3) machine and copied all the files from the DVD to the hard drive (no errors while copying). When that was done, I discovered that none of the copied files are readable. What's more, all the files on the DVD are also unreadable, even though all the file names and directories are in their place.

    What I found out:

      - Windows detects that the disc is in CDFS (CD-ROM File System).
      - The disc is clean as new and has no scratches.
      - All files, when opened in Notepad++, look like "NULNULNULNULNUL" in one line. The size of the files is normal.
      - Other discs that are recognized as CDFS can be read with no problems.

    What I tried:

      - Starting the CDFS service in the Windows registry. Result: a new device in Windows Device Manager (JUBS JGH2ZCT SCSI CdRom Device).
      - Removing my CD/DVD device from Device Manager. Result: Windows restarted the system and reinstalled the driver.

    So... how do I read the DVD, when I have no access to any other PC or any other OS?

    Read the article

  • Acer Aspire Touchscreen Not Responding

    - by Jerry
    I have an Acer Aspire Z3101 all-in-one touchscreen running Windows 7 Home Premium. I am unable to get the screen to respond to my touch. I have checked the system, and all drivers are OK according to the computer. Using the settings to set up pen and touch and to calibrate does not work; my touch brings no response at all. I reinstalled the applications (Windows Touch Pack and Acer Touch Portal), which didn't help. Today I did a complete reinstall, returning the computer to factory settings, and there is still no response. At first I thought that perhaps a driver or file was missing, but now I doubt that, since the reinstall didn't help. Have you any idea what could be causing this problem? I am now at the stage of pulling my hair out and haven't had much help from Acer. I have had the computer for just over a year, so it is no longer under guarantee, and this has me very worried, as I am unemployed. Any help is much appreciated. Thank you.

    Read the article

  • Why am I experiencing random connection timeouts? (CentOS)

    - by Ryan
    I have a CentOS server that currently hosts several websites (all related to each other in some form or another). As of recently, at the most random times of day the website speed will slow to a crawl and eventually hit a connection timeout. When I say random times, this typically happens anywhere between 10am and 1pm, usually; however, this morning it happened at 8am. I do not have a lot of familiarity with servers as far as knowing what to look for in this situation. What are some possible causes of my server slowing the websites down to a complete crawl or timing out? Are there specific things I should check for when this happens? I have noticed, using

        tail /var/log/httpd/access_log

    that when this downtime occurs there are a lot of requests from IP addresses related to Bingbot, Googlebot, and sometimes various bots or spiders that I am unfamiliar with. Could this be related, and if so, how can I keep it from making my websites lag out? Thanks in advance for any help or advice. The websites that are timing out are built with PHP and use a MySQL database to display information.
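
    Two quick things worth doing while the slowdown is happening: rank the user agents hitting the server, and ask the well-behaved crawlers to slow down. A sketch, assuming the stock combined log format and web root:

        # Top user agents in the access log (field 6 when split on '"')
        awk -F'"' '{print $6}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head

        # Ask compliant crawlers to wait 10s between requests; Bing honors
        # Crawl-delay, Googlebot ignores it (use Webmaster Tools for Google).
        printf 'User-agent: *\nCrawl-delay: 10\n' > /var/www/html/robots.txt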

    Read the article

  • What hardware would I need (approx) to run ESXi server?

    - by mr.b
    Hi, I am considering purchasing off-the-shelf commodity hardware in order to build a server that will host virtual machines using ESXi. The intended purpose for this server is NOT mission-critical tasks. It will have to run perhaps 20-50 Windows XP/Vista/7 virtual machines (in total, but closer to the 20 figure). Each guest would have 1-2 GB of RAM, and probably two to three times more disk space than the guest OS needs with a clean install and all updates applied (that would be around 6-8 GB for XP, and I believe closer to 10-15 GB for Windows 7). Those guests will act as a test ground for a new product that is network-management software, so the guests will idle most of the time once initially loaded, but if I give them some task to complete, they should be able to perform reasonably well.

    Now, from what I have learned, CPU is usually not much of an issue (6 cores would do it), and memory should not be lacking, but it doesn't have to be the sum of all guests, because of overcommitment. That leads me to IO, which, as it seems, is the bottleneck. Since I have very little experience with ESXi (and ESX, too), I'd like to ask:

      - How much memory could I save by overcommitment, and how does it affect performance?
      - Is a 6-core CPU enough to run the system described above?
      - Would it be possible to run the entire server off two (or even one) SSD drives (to host the system virtual disks), with a few additional HDDs (2-3) in RAID 0 used as secondary storage?
      - I read somewhere that ESXi allows something like a "master image": essentially a virtual machine that is "deployed" many times, so that disk space can be saved by storing only each guest's differences instead of copying around whole virtual disks. Is this true, and how can it help me?
      - Are there any other things I need to take into consideration when building this off-the-shelf solution?

    I should probably mention here that I'm fully aware of issues like SPOF regarding the power supply, RAID 0, etc., but since it's only a testing ground and not a production system, that's not so important to me. Thanks, B.

    Read the article

  • Mysqld shutting down by itself

    - by AJ Naidas
    I'm running a WordPress blog that gets medium-high traffic. It is hosted on an Ubuntu server (2 GB memory, 2-core processor, 40 GB SSD disk, 3 TB transfer). The problem is that MySQL shuts down by itself after an hour or two, and I have to restart it each time this happens. I checked the logs, and this is what I found:

        140612  6:48:14 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
        140612  6:48:14 [Note] Plugin 'FEDERATED' is disabled.
        140612  6:48:14 InnoDB: The InnoDB memory heap is disabled
        140612  6:48:14 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        140612  6:48:14 InnoDB: Compressed tables use zlib 1.2.3.4
        140612  6:48:14 InnoDB: Initializing buffer pool, size = 1.4G
        InnoDB: mmap(1502412800 bytes) failed; errno 12
        140612  6:48:14 InnoDB: Completed initialization of buffer pool
        140612  6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        140612  6:48:14 [ERROR] Plugin 'InnoDB' init function returned error.
        140612  6:48:14 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        140612  6:48:14 [ERROR] Unknown/unsupported storage engine: InnoDB
        140612  6:48:14 [ERROR] Aborting
        140612  6:48:14 [Note] /usr/sbin/mysqld: Shutdown complete

    Judging by this line:

        140612  6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool

    I suspect that this is a memory problem, but I would like to hear from the experts here before I conclude. Is this a lack-of-memory problem? Do you think the value of max_connections in my.cnf (currently 100) is a potential cause and needs increasing? TIA.
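
    The log supports the memory theory: errno 12 is ENOMEM, and a 1.4 GB InnoDB buffer pool plus a web stack on a 2 GB box leaves no headroom, so the pool allocation fails once memory fills up. A sketch of more conservative my.cnf values (assumptions to tune from, not gospel):

        [mysqld]
        # Leave room for the web server on a 2 GB host; 256-512M is a saner
        # starting point than 1.4G for a single WordPress database.
        innodb_buffer_pool_size = 384M
        # Raising max_connections would add per-connection memory and make
        # this worse; if anything, lower it.
        max_connections = 50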

    Read the article
