Search Results

Search found 17366 results on 695 pages for 'memory card'.


  • I (stupidly) converted a TrueCrypt encrypted disk to GPT in Disk Management: now TrueCrypt won't mount it

    - by asilentfire
    Backstory: After moving a Macrium Reflect disk image from my TrueCrypt external drive (with whole-disk encryption) onto an unencrypted drive, I used Windows PE with Macrium Reflect to restore my internal disk from the recovery image on the unencrypted external drive, but my Windows 8 failed to boot. I then went back and also restored the System Partition (looking now, it is currently EFI), but I still couldn't boot into my backup. I was in a hurry to get online for something, so I just did a clean install of Windows 8 without the backup.

    After installing Windows 8, I went into Disk Management out of curiosity, to see whether there were other Windows 8 partitions that Macrium might have missed, and there is (by default) a Recovery Partition of 100MB. My memory of this is hazy, as I was trying to get up and running for an exam at 4 AM: something in Disk Management prompted me to convert my encrypted external drive to GPT. I have no idea why I did this, but I went ahead and allowed it to convert my TrueCrypt drive to GPT.

    Now I can't mount the drive in TrueCrypt. Disk Management sees it as Disk 1, Basic, and Unallocated. I tried converting it back to MBR with Disk Management, but no dice with TrueCrypt: if I try to mount the disk I get the message "Incorrect password or not a TrueCrypt volume". I should never have messed with a TrueCrypt drive in Disk Management, but I did. I have important college work on that drive, and fear I have lost it forever. PLEASE HELP.


  • How to find out which process is hogging a Linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it responded slowly to ping queries from other servers, and when I tried to log in using ssh, the login took about 10 seconds. I was able to resolve the problem by guesswork: I killed one process that I suspected was the culprit, which fixed it. However, I would like to know the proper approach to identifying the culprit in this kind of "slow server" situation, and the proper way to resolve such slowness. These were the conditions while the server was slow:

      # vmstat 3 3
      procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
       r  b   swpd    free    buff   cache  si  so  bi   bo    in    cs us sy id wa st
       1  1    176 6730868  285052 4899676   0   0   3    4     0     0  1  1 97  1  0
       0  0    176 6751576  285064 4899704   0   0   0  115 15307 37171  1  1 96  3  0
       0  0    176 6751948  285068 4899700   0   0   0   23 14813 39559  1  1 98  1  0

      # top
      top - 16:38:18 up 150 days, 19:36, 64 users,  load average: 1.68, 1.46, 1.44
      Tasks: 1287 total,   2 running, 1284 sleeping,   1 stopped,   0 zombie
      Cpu(s):  1.3%us,  1.7%sy,  0.1%ni, 95.9%id,  0.7%wa,  0.0%hi,  0.2%si,  0.0%st
      Mem:  16620824k total,  9867124k used,  6753700k free,   287424k buffers
      Swap:  8193140k total,      176k used,  8192964k free,  4898996k cached

        PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+   COMMAND
      26258 khk   34  19  130m  47m 7088 S 11.2  0.3 385:32.42 edm
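
    For reference, a minimal triage sequence for a "slow server" like this might look as follows. This is only a sketch: iostat/pidstat come from the sysstat package and iotop from its own package, so they may need installing first.

      top -b -n 1 | head -20                                        # snapshot of the current CPU hogs
      ps -eo pid,ppid,pcpu,pmem,stat,comm --sort=-pcpu | head -15   # process list sorted by CPU
      iostat -x 3 3     # per-device utilization and await times (sysstat)
      iotop -o -b -n 3  # which processes are actually doing I/O (iotop)
      pidstat -w 3 3    # per-process context switches; note the high cs column in the vmstat output above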


  • Cannot install Windows. Compaq Presario CQ62

    - by Matthew
    I bought a used Compaq Presario CQ62 cheap and went to install Windows on it. I formatted the partition and started the install, when I got this error: "Windows cannot install required files. The file may be corrupt or missing. Make sure all files required for installation are available and restart the installation. Error code: 0x80070017". I have used this disc before with no problems, but internet searching suggested burning one at 2x speed because that helps for some reason. I'm burning one now, but my question is: why would I get this error, OTHER than the disc being bad? I'm pretty certain this one isn't, as I have used it before. (OK, so the slowly burned CD, made with ImgBurn, didn't work either, so it's DEFINITELY not the disc.)

    Also: I took one stick of RAM out, because internet searching suggested that too, but it made no difference. I ran memory and hard drive checks and they passed fine, and I reset the motherboard options to defaults. What could it be?! Help, I'm completely stumped. Thanks in advance for any answers.


  • Ubuntu 10.10 - PC shutdown before boot shortly after BIOS loads

    - by clem
    Hi - Since installing Ubuntu 10.10 (up from Karmic) I've started having problems starting up the PC. I've done a complete wipe (Boot and Nuke) of the hard drive and reinstalled Ubuntu 10.10, but the problem still occurs. There is no dual boot on the PC, just Ubuntu.

    Here is the problem: each morning, when I turn the PC on from being off overnight, the PC starts up and loads the BIOS, and I get the following message: "Verifying DMI Pool Data... K8 NPT Data Change...Update New Data to DMI!......." Then, poof, the computer shuts off. However, after switching the computer back on around six or seven times, it will eventually boot up without any problem. Also, once it has been up and running for a while, I can shut down and restart the PC first time, without any issues.

    I have also noticed a problem with the USB mouse being recognised: once I finally get the computer booted up, I need to unplug the mouse and plug it back in to get it working. I've opened the PC up and checked the connections (cables, cards and memory) and it all seems fine. The main difficulty in troubleshooting this is that I cannot test any suggestion or fix until the next morning, because once the computer is up and running it stays that way, and I do not leave the computer on overnight, to save energy.

    So: is this a hardware issue or a boot/software issue? This is a very odd problem and I have googled to no avail. Any suggestions? Many thanks, Clem


  • PHP extensions & Apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped, to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it.

    Basically, Thursday last week everything was fine on our server. I came in on Friday and it was a mess: PHP extensions were missing or not working, and Apache modules were gone (e.g. oci_* was gone completely, odbc_* was still there but not working, and the Apache ntlm_auth module for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else.

    Now I'm wondering if somehow those extensions and modules, which we installed months ago, were only saved in some kind of local memory, and the restart wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little or no sense to me.

    To expand on an example of something that's gone very wrong, the PHP odbc_* extension: it's still on the server, and it doesn't return "undefined function" or anything, but it just cannot connect to the data source any more. I've tested the same data source and login details through the command line and they work perfectly fine, but all of a sudden odbc_connect() in PHP just can't connect ( [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source. ). But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function.

    Does anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x, by the way.
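
    For what it's worth, a minimal first-pass check for a case like this might be the following; the DSN name and credentials are placeholders:

      php -m | grep -i odbc            # is the extension loaded at all?
      php -i | grep -i 'loaded conf'   # which php.ini is the CLI actually reading?
      httpd -M 2>/dev/null | sort      # list the Apache modules currently loaded
      isql -v MYDSN myuser mypass      # unixODBC's own test client, bypassing PHP entirely

    Since the CLI works and Apache/PHP doesn't, comparing the php.ini (and extension_dir) used by each SAPI would be the first thing to rule out.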


  • file read performance degrades as number of files increases

    - by bfallik-bamboom
    We're observing poor file-read I/O results that we'd like to understand better. We can use fio to write 100 files at a sustained aggregate throughput of ~700 MB/s, but when we switch the test to read instead of write, the aggregate throughput drops to only ~55 MB/s. The drop seems related to the number of files, since read and write throughput are comparable for a single file and then diverge proportionally as we increase the number of files.

    The test server has 24 CPU cores and 48 GB of memory, and runs CentOS 6.0. The disk hardware is a RAID 6 array of 12 disks with a Dell H800 controller, partitioned with ext4 using the default settings. Increasing the readahead (using blockdev) improves the read throughput significantly, but it still doesn't match write speed; for instance, increasing the readahead from 128 KB to 1 MB improved read throughput to ~145 MB/s.

    Is this a known performance issue in our OS/disk/filesystem configuration? If so, how can we tell? If not, what tools or tests can we use to isolate the issue further? Thanks.
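
    For reference, a readahead experiment along those lines might look like this; the device name is an assumption:

      blockdev --getra /dev/sdb           # current readahead, in 512-byte sectors
      blockdev --setra 256 /dev/sdb       # 256 sectors  = 128 KB
      blockdev --setra 2048 /dev/sdb      # 2048 sectors = 1 MB
      echo 3 > /proc/sys/vm/drop_caches   # drop the page cache between runs for a fair comparison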


  • Hardware testing tool/suite

    - by Aviator
    Hi all, I just bought a new Core i5 system (assembled) and started installing Windows 7. It failed many times, and at some point got installed. After that there were frequent crashes related to MEMORY, so I checked the RAM using memtest86+ and found many errors. I got the RAM replaced by the vendor, and now if I install ANY OS, at some point in the installation it either freezes completely, with no response for hours, or restarts automatically. I tried installing Windows 7, Windows Vista and Ubuntu 9.10. I tested the new RAM again and found no problems in about 2 passes with memtest86+, and I even updated the BIOS from a bootable USB stick, but the problem persists.

    I am really not sure which piece of hardware is causing trouble. I don't have any OS installed, so I can only test using bootable CDs, DVDs and USB sticks. Please advise on how to proceed: are there any suites or separate tools for checking the integrity of each hardware component? I want to confirm which part is faulty before going for a replacement. Thanks a lot!

    This is the config: Core i5, MSI P55-GD65, G.Skill 2x2GB, Seagate 500GB 7200rpm, CM Extreme 600W PSU, Sapphire Radeon 5770 1GB, LG DVD writer.
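
    As a sketch, one possible from-a-live-CD test sequence; package names vary by distro and the device name is an assumption:

      smartctl -t long /dev/sda   # start a drive self-test (smartmontools); read the result later with smartctl -a
      badblocks -sv /dev/sda      # read-only surface scan of the disk
      stress --cpu 4 --vm 2 --vm-bytes 1G --timeout 600   # load CPU and RAM together ('stress' package)
      sensors                     # watch temperatures while under load (lm-sensors)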


  • Apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat, hosted on EC2; the instance type is Extra Large with 34GB memory. Our application deals with a lot of external web services, and we have one very lousy external web service which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes:

      ps -ef | grep httpd | wc -l
      300

    I have googled and found numerous suggestions, but nothing seems to work. The following configuration changes, taken directly from online resources, are what I have done so far: I have increased the max-connection and max-client limits in both Apache and Tomcat. Here are the configuration details:

      //apache
      <IfModule prefork.c>
        StartServers         100
        MinSpareServers       10
        MaxSpareServers       10
        ServerLimit        50000
        MaxClients         50000
        MaxRequestsPerChild 2000
      </IfModule>

      //tomcat
      <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                 connectionTimeout="600000" redirectPort="8443" enableLookups="false"
                 maxThreads="1500"
                 compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                 compression="on"/>

      //Sysctl.conf
      net.ipv4.tcp_tw_reuse=1
      net.ipv4.tcp_tw_recycle=1
      fs.file-max = 5049800
      vm.min_free_kbytes = 204800
      vm.page-cluster = 20
      vm.swappiness = 90
      net.ipv4.tcp_rfc1337=1
      net.ipv4.tcp_max_orphans = 65536
      net.ipv4.ip_local_port_range = 5000 65000
      net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2xlarge server should serve more than 300 requests; probably I am going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second-delayed] web service to respond. Please help.
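
    A couple of hedged checks that are sometimes revealing in this situation:

      ulimit -n                                          # fd limit inherited by httpd; each connection slot needs sockets
      netstat -ant | awk '{print $6}' | sort | uniq -c   # connection-state counts: piles of ESTABLISHED waiting on the backend?
      apachectl stop && apachectl start                  # ServerLimit changes are ignored on a graceful restart; a full stop/start is needed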


  • Sound doesn't work anymore after replacing RAM

    - by thejh
    Hello, today I replaced one old RAM module with two newer, bigger ones, but now the sound doesn't seem to work any more. I have already run alsaconf and it didn't help. Output of lspci for the audio device:

      00:07.0 Audio device: nVidia Corporation MCP67 High Definition Audio (rev a1)
              Subsystem: Giga-byte Technology Device a002
              Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
              Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
              Latency: 0 (500ns min, 1250ns max)
              Interrupt: pin A routed to IRQ 21
              Region 0: Memory at f5100000 (32-bit, non-prefetchable) [size=16K]
              Capabilities: [44] Power Management version 2
                      Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
                      Status: D0 PME-Enable- DSel=0 DScale=0 PME-
              Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-
                      Address: 0000000000000000  Data: 0000
                      Masking: 00000000  Pending: 00000000
              Capabilities: [6c] HyperTransport: MSI Mapping Enable+ Fixed+
              Kernel driver in use: HDA Intel
              Kernel modules: snd-hda-intel

    The audio device is onboard and has six configurable outputs; two or so are also capable of being an input (if I remember correctly), but I don't know how to control that under Linux. Does somebody know how (or whether) replacing the RAM could be related to my problem, and how to fix it?
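
    These are generic ALSA checks rather than anything specific to this board, but as a first pass:

      aplay -l                        # does ALSA still see the HDA codec?
      amixer sset Master 80% unmute   # a muted or zeroed master channel is a common culprit
      speaker-test -c 2 -t wav        # audible test tone on the default device
      dmesg | grep -i 'hda\|snd'      # any driver errors logged since the hardware change?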


  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server and Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output by the ps command and compare them over time to spot things like memory leaks, but I would like to know what experienced admins do.

    Also, how do I run a disk check on my own schedule, so that when I reboot I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command I can run as a cron job late at night. How often should it be run?

    What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP, and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks
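
    For the disk-check part, a hedged sketch: ext filesystems track a mount count and check interval (tune2fs), and a check can be requested for a reboot you schedule rather than forced on an arbitrary one; the device name is an assumption:

      tune2fs -l /dev/sda1 | grep -i 'mount count\|check'   # show the current thresholds
      tune2fs -c 0 -i 0 /dev/sda1                           # disable the automatic schedule...
      touch /forcefsck                                      # ...and request a check yourself at the next (planned) reboot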


  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment: I have a node.js server running which serves either JSON, HTML templates, or socket.io events. In front of node I have nginx, which serves all static content (CSS, JS, etc.). At this point I would like to cache both static content and dynamic content in memory.

    It's my understanding that varnish can cache static content quite well, and it wouldn't require touching my application code. I think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use redis at the moment for holding session data, and I planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:

    1. Throw varnish in front of nginx and let varnish cache static pages; no app code changes. Redis would cache dynamic DB calls, which would require modifying my app code.
    2. Ignore varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).

    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem, maybe adjust some values in a config, and new servers are up and running semi-painlessly).


  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
    I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small-to-medium website on and, more or less, learn the ins and outs of running my own web server before I bite the bullet and spend a couple of grand on a server. The specs are:

      2 x Intel Xeon 2.4GHz, 400MHz FSB, 512KB cache
      4GB PC2100 ECC Registered memory
      6 x 72GB 10K U320 SCSI hard drives
      Smart Array 5i RAID controller
      Redundant power supplies
      DVD/floppy, dual Intel GB NICs, USB

    Or would I be better off spending a couple hundred bucks on something like this new HP? The only major difference seems to be SATA and a bit more storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro: is hardware four generations old likely to still support Linux? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned, I am simply looking for a server to learn on and probably use for a year or so while I put together a small-to-medium website.


  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for load balancing with persistence for our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520

    After playing around with taking members down and up, and also reloading haproxy (with -sf), I have noticed that appsession isn't 100% effective; it would appear that it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix for balancing, in case request-learn didn't take. Unfortunately that produced scenarios where the prefix would list svr2 but the request was balanced to a different server. I am assuming this is because the appsession table is consulted first and the request sticks on that, before the cookie parameter is used. I have not tested cookie in insert mode (rather than as a prefix on an existing cookie), but I suspect it would yield similar results.

    My question is: which one is checked first, appsession or cookie? Is it an immediate match on whichever is read first, or a fall-through? As a follow-up: is it not recommended to use both in the same backend? As I understand it, cookie persistence takes less memory, is agnostic to reloads, and is far more reliable; appsession, I assume, takes less CPU, since it is reading rather than writing.

    (Bonus question: is there a way to inspect the appsession/cookie table map? "show table" on the socket doesn't show anything except stick-tables.)

    Many thanks in advance, -Nick
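
    On the bonus question: as far as I know only stick-tables are dumpable; appsession's internal table has no socket dump, which matches what you are seeing. A sketch of the stick-table dump, assuming the stats socket is declared in the config (the socket path and backend name here are placeholders):

      # in haproxy.cfg, global section:  stats socket /var/run/haproxy.stat level admin
      echo "show table"           | socat stdio unix-connect:/var/run/haproxy.stat
      echo "show table mybackend" | socat stdio unix-connect:/var/run/haproxy.stat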


  • How do I know if I managed to completely remove an undetected trojan?

    - by ubuntuisbetter
    I caught a trojan that uses explorer.exe to reproduce itself whenever its autostart entry or its main exe file in Programs/x is deleted. It had already tried to contact a suspicious server through explorer.exe; I blocked that via my firewall. What I did:

    1. Removed the autostart entries from the registry
    2. Looked through my services for anything suspicious
    3. Deleted the trojan from Programs/
    4. Went through System Volume Information to find a 2-month-old explorer.exe and replaced the possibly infected one.

    There are no suspicious processes running any more (no duplicate explorer.exe) and nothing tries to connect to the trojan owner's server either. I also checked my system with several anti-malware programs. What the trojan did:

    - Started a second explorer.exe
    - Whenever I deleted the main trojan exe file, it was reproduced (by the second explorer.exe)
    - Whenever I deleted the autostart entry, it was likewise reproduced by that explorer.exe
    - When I terminated the suspicious explorer.exe (which used only half as much memory as the less suspicious one from Windows), a strange thing happened that I know from the computers in my informatics class: a window popped up in the top left of my explorer-less desktop, titled "Personal settings for ... are ...", that obviously copied some files. Then both explorer.exes started again and the trojan was everywhere again.

    What did the trojan actually do to get explorer to rescue it? Is my PC clean of this newish trojan now? What other locations should I check for the trojan? It doesn't seem very high-level; could it have changed other system files, or is the autostart entry vital for it?

    Thanks in advance, your trojan-paranoid friend (getting Linux in a week)


  • "Server taking too long to respond" error

    - by DCJones
    Hi, this is my first post on Server Fault and my first venture into web server configuration.

    The hardware and software:

      CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz
      OS: Linux 2.6.18-128.el5
      Memory: 2GB

    Background: I am running a small database (MySQL) of around 1,000 records, each record containing 44 fields. At the start of each day (00:01) the tables are cleared and populated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox browser, each connected to the internet over a minimum 4Gb broadband connection. Each remote PC displays a URL with a dynamic page of data which is refreshed every 20 seconds. This is a continual process, 24 hours a day.

    The problem I am having is that on odd occasions throughout the day the PC browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very welcome. Best regards, Dereck

    Server config file (httpd.conf):

      ServerRoot "/etc/httpd"
      PidFile run/httpd.pid
      Timeout 120
      KeepAlive On
      MaxKeepAliveRequests 200
      KeepAliveTimeout 5

      StartServers 8
      MinSpareServers 5
      MaxSpareServers 20
      ServerLimit 256
      MaxClients 254
      MaxRequestsPerChild 4000

      StartServers 2
      MaxClients 150
      MinSpareThreads 25
      MaxSpareThreads 150
      ThreadsPerChild 25
      MaxRequestsPerChild 0
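
    For what it's worth, those two groups of directives look like the stock prefork and worker MPM sections of the RHEL httpd.conf; only the group matching the compiled-in MPM takes effect. A quick way to confirm which one that is:

      httpd -V | grep -i mpm   # e.g. "Server MPM: Prefork"
      httpd -l                 # compiled-in modules: prefork.c or worker.c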


  • Ubuntu 12.04 VirtualBox on powerful W7 quite slow

    - by wnstnsmth
    I own a ThinkPad T420s with 8GB RAM, a 160GB SSD and a quite fast i7 processor: all in all a very fast computer that works perfectly. I am, however, not very impressed by the performance of my Ubuntu 12.04 virtual machine running on VirtualBox 4.1.18. I accept that a virtual machine is always a bit slower than the host system, but I still think it should perform better given the settings I give it:

      4096 MB RAM
      1 CPU, without CPU limitation (I would like to give it more, but then it does not seem to work; I am not experienced in this, so maybe somebody could give me advice on this too)
      PAE/NX, VT-x/AMD-V and nested paging activated
      96 MB graphics memory (no 2D or 3D acceleration)
      ~14 GB disk space, of which about 7 GB are currently used

    Maybe I misconfigured something; could you give me a hint please? Thanks!

    Edit: What I mean by slow is that, for example, switching tabs in the browser (whether Firefox or Chrome) only happens after a delay of around 0.5s, as does switching application windows and/or double-clicking applications in the dock to get all open windows. Opening Aptana takes about a minute, whereas opening something like Photoshop on the host system takes 5 seconds.
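
    A hedged guess on the multi-CPU part: VirtualBox requires the I/O APIC to be enabled before a VM will start with more than one vCPU. With the VM powered off, something like the following (the VM name is a placeholder):

      VBoxManage modifyvm "Ubuntu1204" --ioapic on --cpus 2
      VBoxManage startvm "Ubuntu1204"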


  • Issues with "There is already an object named 'xxx' in the database"

    - by Hoser
    I'm fairly new to SQL, so this may be an easy mistake, but I haven't been able to find a solid solution anywhere else. The problem is that whenever I try to use my temp table, I'm told it cannot be used because there is already an object with that name. I frequently try switching the names, and sometimes it'll let me work with the table for a little while, but it never lasts long. Am I dropping the table incorrectly? Also, people have suggested just using a permanent table, but this database does not allow me to do that.

      create table #RandomTableName(NameOfObject varchar(50), NameOfCounter varchar(50), SampledValue decimal)

      select vPerformanceRule.ObjectName, vPerformanceRule.CounterName, Perf.vPerfRaw.SampleValue
      into #RandomTableName
      from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfRaw
      where (ObjectName like 'Processor' AND CounterName like '% Processor Time')
         OR (ObjectName like 'System' AND CounterName like 'Processor Queue Length')
         OR (ObjectName like 'Memory' AND CounterName like 'Pages/Sec')
         OR (ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk Queue Length')
         OR (ObjectName like 'Physical Disk' AND CounterName like 'Avg. Disk sec/Read')
         OR (ObjectName like 'Physical Disk' and CounterName like '% Disk Time')
         OR (ObjectName like 'Logical Disk' and CounterName like '% Free Space' AND SampleValue > 70 AND SampleValue < 100)
      order by ObjectName, SampleValue

      drop table #RandomTableName


  • How to fix Windows 7 when System Recovery Options hangs?

    - by seansand
    The battery power ran out on my HP G60 laptop and it shut down. Even after recharging, Windows 7 will now not start up. Every attempted startup bluescreens and takes me to the "Startup Repair (recommended)" / "Start Windows Normally" console screen.

    "Startup Repair (recommended)" appears to be the right choice, but when I choose it, I get taken to a screen which appears to be System Recovery Options (it has the same wallpaper as the screenshots here: http://www.sevenforums.com/tutorials/668-system-recovery-options.html). However, I just get a cursor with nothing else; no "System Recovery Options" window ever pops up. (A black console window does appear for a split second, but too fast to read its text.) The empty screen with cursor hangs indefinitely.

    System Recovery Options normally runs off a partition on the laptop hard drive. When I got the laptop I also created System Repair Discs (more than one, in fact), and when I try any of them, they all result in the same wallpaper and empty screen with a lone cursor. Ctrl-Alt-Del does nothing. The computer did not come with a Windows 7 installation disc, so there's no obvious way to reinstall Windows 7.

    Safe mode does not work; startup fails and I just get sent back to the "Startup Repair (recommended)" / "Start Windows Normally" screen. "Start in last good state" does not work either, with the same result. Memory and hard disk checks found no errors.

    Do I have any options at all? "System Recovery Options" seems to be what I want, but the screen that is supposed to take me there just hangs.


  • Find out what fonts are being sent to a printer

    - by user38307
    I have an issue where two computers running XP, with identical print drivers, behave differently when printing over the parallel port to receipt printers. For one type of receipt, printing is instant. For another kind, printing is delayed by ten seconds on most machines, but not on the other. This happens even if I swap the printers.

    I believe the delay occurs because this computer has a different set of fonts installed (it is used for graphic design). The printers have built-in fonts, and if you do not use one of the built-in fonts, the printer has to build up an image in memory rather than just spitting out its own fonts. For this particular kind of receipt on this particular computer, a font is being sent which the receipt printer does not have built in.

    My question is: is there a way to find out what fonts are being sent to the printer? This would let me narrow down what I need to modify in the Windows fonts folder. Thank you!


  • Fedora Core 6 Migration

    - by Matthew Sprankle
    I am at a loss as to what I should do with this server. I need it to run PHP 5.3 and a corresponding version of MySQL. I received a client today through work who is using Fedora Core 6, running 10 very small websites on a very hodge-podge setup. My original idea was just to upgrade to PHP 5.3. I have yum (version 3.0.8 installed) reconfigured for the Fedora archive, but the latest PHP version it offers is 5.1.8. I am still relatively new to server setups and am nervous about wiping their server to upgrade it; since it is about 6-8 years old, I'm not sure it will even support the newest version of Fedora. The server specs are:

      Parallels Plesk Panel version 9.5.4
      Operating system: Linux 2.6.9-023stab048.4-smp
      CPU: GenuineIntel, Intel(R) Xeon(R) CPU E5335 @ 2.00GHz
      10GB disk space and 1GB of memory

    I use Fedora for my personal server, so I was a little familiar with it, though I haven't done anything too extravagant. Is there a way I can escape this nightmare by installing PHP 5.3, or do I need to migrate these sites to a new server?
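
    If migration wins out, a minimal sketch of the usual route; every path and hostname here is a placeholder:

      # on the old box: dump the databases and the site files
      mysqldump --all-databases > /root/all-dbs.sql
      tar czf /root/sites.tar.gz /var/www/vhosts
      # push both to the new server and restore there
      rsync -avz /root/all-dbs.sql /root/sites.tar.gz newserver:/root/
      ssh newserver 'mysql < /root/all-dbs.sql && tar xzf /root/sites.tar.gz -C /'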


  • Protocol (or service publish/discovery) to detect devices in a network

    - by Gobliins
    We connect some embedded devices to a network. What I am looking for now is a way to find the devices' IPs and identify them. We work with Windows PCs, and I am about to write a C# tool that should do this. I thought about sending a UDP broadcast where the ack carries the device's IP, which would mean the device needs a daemon running to assign itself an IP. Another option would be running a service on the device (like a printer does), with the PC simply looking the service up. I have read about things like APIPA, zeroconf, IPv4 link-local, Bonjour, DNS-SD and mDNS; they can automatically assign IPs and publish services in a network. My questions:

    - Can someone recommend what would be good for this task?
    - The protocol or service should be light on resources (memory/CPU usage).
    - Are there standard protocols to use?
    - Is DNS a good idea, or would it be too resource-hungry just for finding a device's IP?
    - It should also work when no DHCP servers are around.

    Edit: To clarify a bit: the IP configuration is automatic. The problem to focus on is how to tell the PC which IP in the network belongs to the device (with a direct connection there would only be one), i.e. its identity.


  • Reducing CPU usage on a Cisco 7206VXR

    - by naimson
    I have a 7206VXR (NPE-G2). At a rate of 140 kpps it reaches 80% CPU, so I am looking for ways to reduce the load. I could turn off NetFlow (though I don't want to, as monitoring is highly important for me), but that would only give me back 10-20%. At this moment, with 84 kpps, I am at 58%. "sh processes cpu sorted" gives me this:

      PID  Runtime(ms)   Invoked  uSecs   5Sec   1Min   5Min TTY Process
      109    163534600 537236763    304 35.38% 32.83% 16.85%   0 IP Input
       67       829396     52280  15864  0.15%  0.01%  0.00%   0 Per-minute Jobs
       68      5542736   3053476   1815  0.15%  0.18%  0.16%   0 Per-Second Jobs
       51       635852   1116315    569  0.07%  0.03%  0.02%   0 Net Background
      329       120396   4607274     26  0.07%  0.00%  0.00%   0 EIGRP-IPv4 Hello
      105        50508  95032488      0  0.07%  0.05%  0.05%   0 IPAM Manager
        6      4068580    476916   8531  0.00%  0.07%  0.05%   0 Check heaps
        7         7768      3634   2137  0.00%  0.00%  0.00%   0 Pool Manager
        8            0         1      0  0.00%  0.00%  0.00%   0 DiscardQ Backgro
       10            8       708     11  0.00%  0.00%  0.00%   0 WATCH_AFS
        5            0         1      0  0.00%  0.00%  0.00%   0 RO Notify Timers
       12            0         2      0  0.00%  0.00%  0.00%   0 ATM VC Auto Crea
        9            0         2      0  0.00%  0.00%  0.00%   0 Timers
       11            0         2      0  0.00%  0.00%  0.00%   0 ATM AutoVC Perio
       13          296    610532      0  0.00%  0.00%  0.00%   0 IPC Event Notifi
       16            0         1      0  0.00%  0.00%  0.00%   0 IPC Zone Manager
       17         3584   2980311      1  0.00%  0.00%  0.00%   0 IPC Periodic Tim
        4            0         1      0  0.00%  0.00%  0.00%   0 EDDRI_MAIN
       19            0         1      0  0.00%  0.00%  0.00%   0 IPC Process leve
       20            0         1      0  0.00%  0.00%  0.00%   0 IPC Seat Manager
       21           96    174453      0  0.00%  0.00%  0.00%   0 IPC Check Queue
       14            4     50890      0  0.00%  0.00%  0.00%   0 IPC Dynamic Cach
        3            0         1      0  0.00%  0.00%  0.00%   0 cpf_process_tpQ
       24          756    305371      2  0.00%  0.00%  0.00%   0 IPC Keep Alive M
       25         2340    610561      3  0.00%  0.00%  0.00%   0 IPC Loadometer
       22            0         1      0  0.00%  0.00%  0.00%   0 IPC Seat RX Cont
       15            0         1      0  0.00%  0.00%  0.00%   0 IPC Session Serv
       18         1620   2980310      0  0.00%  0.00%  0.00%   0 IPC Deferred Por
       29            0         1      0  0.00%  0.00%  0.00%   0 Exception contro

    sh run (grepped): http://pastie.org/5483194

    Hardware:

      c7200p-adventerprisek9-mz.151-4.M1.bin
      Cisco 7206VXR (NPE-G2) processor (revision A) with 917504K/65536K bytes of memory
      Processor board ID 2xxxxxxx
      MPC7448 CPU at 1666MHz, Implementation 0, Rev 2.2
      6-slot VXR midplane, Version 2.1


  • 5-year-old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and arranged for the heaviest queries to run against a copy of the database on another server we keep as a backup. However, this will not last much longer, and we are looking to upgrade.

    Currently the server's CPUs are four Intel(R) XEON(TM) 2.00GHz processors, with 1GB of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts to the program: side A inserts and updates most of the data, while side B reads the data and makes some minor updates. Our biggest day so far for side A is 800 users (out of 40,000 users all year) entering data into the system; side B usage is currently unknown, but we have 1,000 clients in total. The system will most likely cap out at 5,000 side-B clients and about 300,000 side-A users a year.

    The current database is 5 years old, so we can expect it to grow quite rapidly, possibly doubling each year (we can most likely archive older records if it comes to that). With that said, should we get a server for each side of the app, side A being the master and side B the slave, with any updates made on side B routed to side A? So the question is whether I should get one or two of these:

      2 x Intel Nehalem Xeon E5520 2.26GHz (8 cores)
      12GB DDR3 memory
      500GB SATA II HDD
      100Mbps port speed

    And naturally I would need a redundant backup, so it could potentially be four of them.


  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which at some point it is not updated anymore. The code is always generated dynamically from a database with 10 million entries so each HTML code page rendering searches there for say 60 or 70 of those entries and then renders the page. So, for those expired pages, I want to use a caching system which will be VERY simple (like just enter a record with the rendered HTML and (if I need) remove it). I tried to do it file-based but the search for the existence of a file and then passing it through php to actually render it , seems like too much for what I want to do. I was thinking of doing it on mysql with a table with MEDIUMBLOBs (each page is around 100k). It would hold about 150000 such records (for now, at least). My question is: Would it be faster to let mysql do the lookup of the file and the passing to php or is the file-based approach faster? The lookup code for the file based version looks like this: $page = @file_get_contents(getCacheFilename($pageId)); if($page!=NULL) { echo $page; } else { renderAndCachePage($pageId); } which does one lookup whether it finds the file or not. The mysql table would just have an ID (the page id) and the blob entry. The disk of the system is a simple SATA raid 1 , the mysql daemon can grab up to 2.5GB of memory (i have a proxy running too, eating the rest of the 16GB of the machine. ) In general the disk is quite busy already. My not using PEAR cache, is because I think (please feel free to correct me on this) it adds overhead I do not need because the page rendering code is called about 2M times per day and I wouldn't want to go through the whole code each time (and yes, I have eaccelerator to cache the code too). Any pointer to what direction I should go, would be greatly welcome. Thanks!


  • Using Monit to monitor Resque

    - by Alex
    I'm trying to use Resque as a job runner for Rails. I've tried this config, and many other ways of daemonizing the resque task (because running rake resque:work leaves the terminal tied to that command). Unfortunately, their example configuration doesn't work for me. Does the configuration look correct, or is there another way to turn the process into a daemon? Thank you :)

      check process resque_worker_QUEUE
        with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_QUEUE.pid
        start program = "/bin/sh -c 'cd /data/APP_NAME/current; RAILS_ENV=production QUEUE=queue_name VERBOSE=1 nohup rake environment resque:work& > log/resque_worker_QUEUE.log && echo $! > tmp/pids/resque_worker_QUEUE.pid'"
          as uid deploy and gid deploy
        stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -s QUIT `cat tmp/pids/resque_worker_QUEUE.pid` && rm -f tmp/pids/resque_worker_QUEUE.pid; exit 0;'"
        if totalmem is greater than 300 MB for 10 cycles then restart  # eating up memory?
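
    A hedged aside: in the start program above, the & comes before the output redirection, so the redirection applies to an empty command rather than to rake. A sketch of the presumably intended sequence, using the same paths as the question:

      cd /data/APP_NAME/current
      RAILS_ENV=production QUEUE=queue_name VERBOSE=1 \
        nohup rake environment resque:work > log/resque_worker_QUEUE.log 2>&1 &
      echo $! > tmp/pids/resque_worker_QUEUE.pid
      monit -t && monit reload   # syntax-check the control file, then reload monit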

