Search Results

Search found 30258 results on 1211 pages for 'open ended'.

Page 412/1211 | < Previous Page | 408 409 410 411 412 413 414 415 416 417 418 419  | Next Page >

  • Any way to bring my laptop battery back to life?

    - by Josh
    Recently my laptop battery will get extremely hot (definitely hotter than it should get) when I charge it. After that I usually end up removing it once it's fully charged to let it cool down, which takes a couple of hours... Question is, is my battery dead? The last battery I had that died just ended up lasting 2-3 minutes on battery, with no weird heat issues. And is there any way to possibly fix this? Probably not, but I won't be able to get a replacement anytime soon.

    UPDATE: A few days ago when this happened and it cooled down, assuming it was fully charged, I ran my laptop on battery; the battery lasted about 10 minutes and then the laptop shut down. I plugged it in later and charged it back up, and for a while I had an orange light blinking on my laptop - which I assumed meant the battery was dead, especially since I got 10 minutes of battery life. Then today, I turned my laptop on and was surprised to see that the battery was at 20% and charging (it's been plugged in since the incident above, so it should have been fully charged when I shut it off). I let it charge up, and as usual it got pretty hot around the time it was fully charged, so I turned my laptop off and pulled the battery out to let it cool down. Now the thing is, just now I tried running it on battery, and it's been going for an hour... so maybe it's not dead? (Also, the orange light is no longer blinking...) Thanks in advance if anyone knows what's going on, and how to fix it, if it's fixable =]

    EDIT: Some info if it helps: my laptop is about two years old, and it's an Asus K50ID. I know laptop batteries usually don't last more than a year, but I'm trying to keep this one going for as long as I can.

    Read the article

  • How small (spec-wise) can a virtual machine be and still boot up and run some sort of OS?

    - by IllvilJa
    One of the advantages of virtual machines is that you can be very flexible with their sizes. If the host system permits it, you can have a very large virtual machine with a lot of virtual RAM and disk. You can also go the other way around, give the virtual machine a very modest amount of RAM and disk, and then choose and configure the OS appropriately. The question is, how small a virtual machine have people managed to set up (and get to both boot up and run)? Virtual machines doing something useful are preferable, even if I know "useful" in this context is awfully subjective, but laboratory cases with a configuration stripped beyond common sense could be interesting as well, just to see what people manage to boot and run. It's quite an open-ended and quite academic question, but think of it: an extremely small VM (which still does something useful) takes very little memory and disk and can be quite quickly saved to and restored from disk. If it's also gentle on CPU resources, one might consider having a huge number of such VMs up and running on a host. (Imagine a VM running just an old Commodore 64 or Commodore Amiga in it. OK, way wrong CPU architecture for modern virtualization software running on an x86-based PC, but still an interesting thought. You could have quite a few such small VMs running on a modern PC.)

    Read the article

  • Setting Up Apache as a Forward Proxy with Caching

    - by Karl
    I am trying to set up Apache as a forward proxy with caching, but it does not seem to be working correctly. Getting Apache working as a forward proxy was no problem, but no matter what I do it is not caching anything, to disk or memory. I already checked that nothing in the mods-enabled directory conflicts with mod_cache (I ended up commenting it all out), and I also tried moving all of the caching-related fields into the configuration file for mod_cache. In addition I set up logging for caching requests, but nothing is being written to those logs. Below is my Apache config; any help would be greatly appreciated!!

        <VirtualHost *:8080>
            ProxyRequests On
            ProxyVia On
            #ErrorLog "/var/log/apache2/proxy-error.log"
            #CustomLog "/var/log/apache2/proxy-access.log" common
            CustomLog "/var/log/apache2/cached-requests.log" common env=cache-hit
            CustomLog "/var/log/apache2/uncached-requests.log" common env=cache-miss
            CustomLog "/var/log/apache2/revalidated-requests.log" common env=cache-revalidate
            CustomLog "/var/log/apache2/invalidated-requests.log" common env=cache-invalidate
            LogFormat "%{cache-status}e ..."

            # This path must be the same as the one in /etc/default/apache2
            CacheRoot /var/cache/apache2/mod_disk_cache

            # This will also cache local documents. It usually makes more sense to
            # put this into the configuration for just one virtual host.
            CacheEnable disk /
            #CacheHeader on
            CacheDirLevels 3
            CacheDirLength 5

            ##<IfModule mod_mem_cache.c>
            #    CacheEnable mem /
            #    MCacheSize 4096
            #    MCacheMaxObjectCount 100
            #    MCacheMinObjectSize 1
            #    MCacheMaxObjectSize 2048
            #</IfModule>

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from x.x.x.x
                # IP above hidden for this post
                <FilesMatch "\.(xml|txt|html|js|css)$">
                    ExpiresDefault A7200
                    Header append Cache-Control "proxy-revalidate"
                </FilesMatch>
            </Proxy>
        </VirtualHost>

    Thank you once again!
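
    To see whether mod_cache is serving anything at all, one quick check is to request the same URL twice through the proxy and look for an Age header on the second response, which cached replies carry. A sketch only, assuming the proxy is reachable on localhost:8080 and with example.com standing in for a real site:

        # first request should populate the cache, the second should be a hit
        curl -s -D - -o /dev/null -x http://localhost:8080 http://example.com/
        curl -s -D - -o /dev/null -x http://localhost:8080 http://example.com/ | grep -i '^Age:'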

    Read the article

  • WS 2008 R2 giving "Internal Server Error"

    - by dragon112
    I have had this problem for a while now and can't find the cause at all. When I open a page it will sometimes give a 500 Internal Server Error message. This happens on a website that otherwise works perfectly, but when I try to upload anything it will give this message (all PHP settings have been set to either 1 GB or 3000 seconds, as have the IIS headers). The error also occurs when I open a simple page which does nothing more than include another PHP page and a couple of classes. I have no idea what causes this error and would love to hear from any of you on what this could be. I checked the server logs, and for the upload issue I found this error:

        The description for Event ID 1 from source named cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: managed-keys-zone ./IN: loading from master file managed-keys.bind failed: file not found. The message resource is present but the message is not found in the string/message table.

    Regards, Dragon
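
    If the 500 is coming from PHP itself, getting the underlying error into a log usually narrows things down faster than the event viewer. A minimal sketch, assuming you can edit php.ini (the log path is just an example); IIS's own Failed Request Tracing can likewise show which module returned the 500:

        ; php.ini - surface the real error instead of a bare 500
        log_errors = On
        error_log = "C:\temp\php-errors.log"
        display_errors = Off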

    Read the article

  • Changing dual-monitor settings without closing the laptop lid on OS X

    - by hekevintran
    I have a Unibody MacBook hooked up to an external display. By default when I boot up, the system will go into dual-monitor mode, but I want to use only the external display. The Apple-supplied solution to this problem is to close the lid of the laptop, which puts the machine into sleep mode, and then move the mouse around to wake it up again. Because the machine is woken up with the lid closed, only the external display is detected. After the system is functional again, you can open the lid if you want, and the laptop screen will be non-functional until you either tell the system to detect displays from System Preferences or you turn off the external display.

    Every time I want to use only the external display, I must reach over to close the lid, wait for the machine to sleep, jiggle the mouse, wait for the machine to wake up, and finally open the lid again because I don't want the machine to overheat. I feel that this is a very stupid thing to have to do. Why is there no button or menu option that says "don't use this screen"? Is there any third-party software way to change the screen setup that does not involve physically closing the lid and playing a game of "are you sleeping" in order to switch such a simple software setting? We are in the 21st century, and honestly this is childish.

    Read the article

  • How to repair a damaged transaction log file for Exchange 2003

    - by Markus Larsson
    Hi! Yesterday we had a power failure and the UPS did not work (it had worked perfectly before). Everything seemed to be OK when I started all the servers again, except for the mail: when I try to mount the store I get the following message: "The database files in this store are corrupted."

    Server: Exchange 2003 running on a Small Business Server
    Latest full backup: one week old
    Backup program: Backup Exec 9.0

    This is what I have done:
    1. Copied every file in the MDBDATA folder (edb, stm, log)
    2. Ran Eseutil /d for priv1.edb
    3. Ran Eseutil /p for priv1.edb (took seven hours)
    4. Ran Isinteg -fix -test alltests; here it breaks down. Isinteg fails with the following error: "Isinteg cannot initiate verification process. Please review the log file for more information." The problem is that there is no log file created.
    5. Giving up on this route, I decided to do a restore from the backup. It fails with the error "Unable to read the header of logfile E00.log. Error -501", and with: "Information Store (5976) Callback function call ErrESECBRestoreComplete ended with error 0xC80001F5 The log file is damaged."

    My conclusion is that E00.log is damaged, so how can I repair it so that I can restore the database? Or should I give up and try some other route?
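
    Before going further, it may help to confirm what state the database thinks it is in and what eseutil makes of the log sequence. A sketch of the usual diagnostic sequence, run from the MDBDATA folder (E00 is the log prefix from the error above; paths vary per install):

        rem header dump - look for "Clean Shutdown" vs "Dirty Shutdown"
        eseutil /mh priv1.edb
        rem integrity-check the E00*.log sequence; this should name the damaged log
        eseutil /ml E00
        rem soft recovery - replays whatever logs are intact into the database
        eseutil /r E00

    One caveat: a database that has already been through Eseutil /p generally cannot have old logs replayed into it, so if E00.log itself is beyond repair, the usual fallback is to move the log files aside, mount the repaired store, and accept the loss of the unreplayed transactions.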

    Read the article

  • Thunderbird alerts when expected email does not arrive

    - by user871199
    I am on Ubuntu 12.04 using Thunderbird as my email client; both are up to date. I have a bunch of nightly jobs that do their work and send a status mail. It gets tedious if you keep getting the same or similar mails every day, so I ended up writing mail filter rules that file the emails into their respective folders automatically. If things are going OK, I really don't need to read them. Failure emails are sent to a different alias - if the job runs at all. We recently discovered that one of the jobs had not run for a few days because someone accidentally disabled it. To avoid such problems in future, I would like to set up Thunderbird so that if I don't get an email from a given address within a given duration, it alerts me. My dream solution is to be able to set a frequency - some jobs run every 4 hours. Is this possible? Can I set up Thunderbird (preferred) or another email client to remind me when an expected email does not show up?

    Based on the comments and answer I received, here are the reasons why I would like to use Thunderbird:
    - We are already using Thunderbird. It has calendar support via a plugin, so someone is already watching the clock to remind us about events; this may just be another type of event.
    - An additional job is one more failure point, and may complicate life if it has to monitor multiple hosts.
    - Additional tools - same thing, one more failure point.
    - Thunderbird runs on all the platforms we are using - Windows and Ubuntu - so it is more or less a platform-independent solution.
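
    As far as I know Thunderbird has no built-in "expected mail missed" trigger, so as a workaround sketch: if the status mail also lands in a local Maildir (or can be BCC'd to one), a small cron job can raise an alert whenever nothing fresh has arrived. Everything here - the path, the 4-hour window, the folder name - is hypothetical:

        #!/bin/sh
        # alert if no status mail has arrived in the last 4 hours (240 minutes)
        MAILDIR="$HOME/Maildir/.jobs/new"
        if [ -z "$(find "$MAILDIR" -type f -mmin -240)" ]; then
            # when run from cron, notify-send may need DISPLAY/DBUS set up
            notify-send "Job watchdog" "No status mail in the last 4 hours"
        fi

    Run hourly from cron (0 * * * * /home/user/bin/mailwatch.sh), this at least catches a silently disabled job, even if it is not the in-client reminder asked for.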

    Read the article

  • Using IIS7 as a reverse proxy

    - by Jon
    My question is pretty much identical to the one below, but it did not get an answer as the poster ended up using Linux as the reverse proxy: http://serverfault.com/questions/55309/using-iis7-as-a-reverse-proxy

    I need to have IIS as the main site and Linux (Apache) as the proxied site(s). So I have:
    - site1.com (IIS7)
    - site2.com (Linux Apache)
    with subdomains of sub1.site1.com, sub2.site1.com and sub3.site2.com.

    I want all traffic to go to site1.com, and anything for site2.com should be proxied to the Linux box on the internal network (I believe ARR can do this, but I'm not sure how). I cannot have Apache doing the proxying, as I need IIS exposed directly. Any and all advice would be great.

    EDIT: I think this might help me:

        <rule name="Canonical Host Name" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
                <add input="{HTTP_HOST}" negate="true" pattern="^cto\.com$" />
                <add input="{HTTP_HOST}" negate="true" pattern="^antoniochagoury\.com$" />
                <add input="{HTTP_HOST}" negate="true" pattern="www.antoniochagoury\.com$" />
            </conditions>
            <action type="Redirect" url="http://www.cto20.com/{R:1}" redirectType="Permanent" />
        </rule>

    (from: http://www.cto20.com/post/Tips-Tricks-3-URL-Rewriting-Rules-Everyone-Should-Use.aspx)

    I will have a look at this when I have access to the IIS7 box. Thanks
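
    That rule only canonicalizes host names, though; for the actual proxying, ARR plus URL Rewrite can route by host header. A hedged sketch (the internal address 10.0.0.5 is invented, the rule goes in site1's web.config, and "Enable proxy" must be ticked in ARR's server proxy settings):

        <rule name="Proxy site2 to Apache" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
                <add input="{HTTP_HOST}" pattern="(^|\.)site2\.com$" />
            </conditions>
            <action type="Rewrite" url="http://10.0.0.5/{R:1}" />
        </rule>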

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I get errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I have been testing with a few posts I found, but I was stuck with the error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild the dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and redoing depmod -a. I now don't get the dependencies error, but I get this instead, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that the CentOS template has no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.
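
    One thing worth stressing: an OpenVZ container shares the hardware node's kernel, so ip_tables and friends have to be loaded on the node itself - modprobe and depmod inside the container will never find them, which is exactly what the modules.dep errors suggest. A sketch of what that might look like on the node (the container ID 101 is hypothetical, and the module list is a shortened version of the IPTABLES= line above):

        # on the hardware node, not inside the container
        modprobe -a ip_tables iptable_filter iptable_mangle ipt_state ip_conntrack
        # grant the container those modules, then restart it
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_state ip_conntrack" --save
        vzctl restart 101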

    Read the article

  • Dropbox failing to update?

    - by Mick
    I have two PCs at home, one XP, the other Windows 7 64-bit. I have a Word document that I edit on either PC; to enable this, the file is stored in a Dropbox folder. This usually works fine, but sometimes I find that the file does not get updated on my Windows 7 PC. That is, I edit the file on the Windows XP machine, then go to the Windows 7 PC and see an old datestamp on the file, and sure enough, if I open the file to have a look, the latest edits are not included. If I right-click on the file and select "Browse on Dropbox website", I see that the latest version is correctly there. Surely there must be some option to say "please update this file", but I can find no such thing. Has something gone wrong?

    I should point out that my wireless internet connection is a little intermittent - could this have caused some glitch? I do not leave the old file open in Word on the Windows 7 PC, as I can well imagine that would cause trouble. Also, the icon on the document has the little green tick showing that Dropbox is not in the process of doing a transfer, and the Dropbox icon in my system tray also has the green tick, so Dropbox is not busy transferring some other file(s). If I hover the mouse over the Dropbox icon I get the tooltip "All files up to date".

    Read the article

  • Can't connect to public WiFi with MacBookPro at coffee shops and libraries

    - by Nathan Bowers
    The Problem: I can't connect to public, unencrypted WiFi at my local public library or Peets Coffee.

    My Setup: Late 2006 MacBookPro running 10.5.8. I have Parallels installed. It's supposed to work like this:
    1) Connect to their unencrypted WiFi network
    2) Open a browser, which redirects you to their "enter password/agree to terms" page
    3) Browse normally

    I can connect to the WiFi network, but when I try to authenticate I always get stuck in a redirect loop. It's been like this for a while, even before I upgraded to 10.5.8. I never have trouble with encrypted networks or regular open WiFi.

    What I've tried:
    - Disabling Parallels connections in Network Prefs. Superstition: somehow Parallels installed something in the network stack that's messing me up.
    - Pinging the IP address of the WiFi node I'm connected to. I can ping it, it's there, but I still get stuck in this authentication redirect loop.
    - Tried different browsers, tried different cookie and security settings. Even tried IE under Parallels. No dice.
    - Tried flushing the DNS cache.
    - Asked library and coffee employees for help. It didn't go well.

    My Question: Anybody else have this problem? What should I be looking for?

    Read the article

  • Nginx proxy with Redmine SVN authentication.

    - by Omegaice
    I am attempting to set up a system where an nginx server runs as a reverse proxy for multiple websites. To separate the websites I have created a Linux container for each site, to reduce conflicts in database usage etc. I am currently trying to get my main site working: nginx with Passenger is set up and connecting to Redmine, and I have an Apache install specifically for serving the SVN over HTTP, attempting to use the Redmine authentication with it. I have set everything up as described in the Redmine how-tos, but when I check a project out of the SVN it always works, even if the project is private, and whenever I try to commit to a repository it fails with "Could not open the requested SVN filesystem". The Apache error log entry for that event is:

        (20014)Internal error: Can't open file '/srv/rcs/svn/error/format': No such file or directory

    If I take out the Redmine authentication, I can check out and check in fine, but there is no authentication. Does anyone have any ideas?

    Edit: I tried to solve this problem another way, by attempting to have the authentication work via LDAP. I managed to get my user to log into the Redmine website, but as soon as I tried to check anything out it said that access to the repository was forbidden.
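
    For what it's worth, "Can't open file '.../format': No such file or directory" usually means Apache resolved the request to a path that is not actually a Subversion repository, or one the Apache user cannot read, so it may be worth double-checking the SVNPath/SVNParentPath mapping and permissions before blaming the auth layer. A sketch, with "myrepo" standing in for a real repository name and www-data for the Apache user:

        # does the path Apache derives actually contain a repository (format, db/, ...)?
        ls -l /srv/rcs/svn
        sudo -u www-data svnlook info /srv/rcs/svn/myrepo
        # commits need the repository to be writable by the Apache user
        chown -R www-data:www-data /srv/rcs/svn/myrepo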

    Read the article

  • Query Execution Failed in Reporting Services reports

    - by Chris Herring
    I have some Reporting Services reports that talk to Analysis Services and at times they fail with the following error:

        An error occurred during client rendering.
        An error has occurred during report processing.
        Query execution failed for dataset 'AccountManagerAccountManager'.
        The connection cannot be used while an XmlReader object is open.

    This occurs sometimes when I change selections in the filter. It also occurs when the machine has been under heavy load and then will consistently error until SSAS is restarted. The log file contains the following error:

        processing!ReportServer_0-18!738!04/06/2010-11:01:14:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'., ;
        Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'.
        ---> System.InvalidOperationException: The connection cannot be used while an XmlReader object is open.
        at Microsoft.AnalysisServices.AdomdClient.XmlaClient.CheckConnection()
        at Microsoft.AnalysisServices.AdomdClient.XmlaClient.ExecuteStatement(String statement, IDictionary connectionProperties, IDictionary commandProperties, IDataParameterCollection parameters, Boolean isMdx)
        at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IExecuteProvider.ExecuteTabular(CommandBehavior behavior, ICommandContentProvider contentProvider, AdomdPropertyCollection commandProperties, IDataParameterCollection parameters)
        at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.ReportingServices.DataExtensions.AdoMdCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.ReportingServices.OnDemandProcessing.RuntimeDataSet.RunDataSetQuery()

    Can anyone shed light on this issue?

    Read the article

  • Virtual LAN on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.]

    Please give this a quick read and let me know if I'm missing something before I start trying to make it work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it.

    Goals:
    - Create a 'road-warrior'-capable, star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers.
    - Enable CIFS access to a cloud-server-based installation of Alfresco.
    - Allow eventual implementation of some form of single-sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future.

    Given:
    - All servers will live in the public internet cloud (Rackspace Cloud Servers).
    - The OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start).
    - Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes, over various ISPs, using their company laptops or personal home desktops.

    Based on my research thus far, to accomplish this I'll need:
    - OpenVPN with bridging enabled, to create a star-shaped "virtual" LAN: http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html
    - A road-warrior network configuration, as described in this Shorewall article (lower down the page): http://www.shorewall.net/OPENVPN.html
    - Bridge addressing configured (probably DHCP): http://openvpn.net/index.php/open-source/faq.html#bridge-addressing
    - CIFS/Samba configured to accept the VPN IP addresses: http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel
    - Client software set up, with keys configured for access (potentially through an OpenVPN-AS client portal): http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html
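
    As a sanity check on the first item, the bridged server config is only a few lines once the tap/bridge interfaces exist on the host. A minimal sketch, with every address invented for illustration:

        # server.conf - bridged ("tap") mode, a sketch only
        port 1194
        proto udp
        dev tap0                 # must be attached to the host bridge (e.g. br0)
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        # bridge gateway, netmask, then the address pool handed to road warriors
        server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
        keepalive 10 120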

    Read the article

  • Can't get port-based virtual hosts working in Apache 2.2 on CentOS 5.2, Plesk 8.6

    - by soopadoubled
    I have installed Google Sitemap Generator on my CentOS server, which is running Plesk 8.6. Google Sitemap Generator adds an include to an external conf in my httpd.conf as follows:

        Listen 8181
        NameVirtualHost *:8181
        <VirtualHost *:8181>
            DocumentRoot "/usr/local/google-sitemap-generator/admin-console"
            ScriptAlias /cgi-bin/ "/usr/local/google-sitemap-generator/admin-console/cgi-bin/"
            <Directory "/usr/local/google-sitemap-generator/admin-console">
                Allow from all
                Options ExecCGI
                DirectoryIndex index.html
            </Directory>
        </VirtualHost>
        LoadModule google_sitemap_generator_module /usr/local/google-sitemap-generator/lib/mod_sitemap.so

    After installation I should be able to navigate to myserverip:8181 and access the GSG console. Unfortunately my browser throws up "Safari can't open the page "http://myserverip:8181/" because the server where this page is located isn't responding." I've checked the port with netstat and nmap, and it's open and listening. I've added a rule to allow traffic on 8181 in iptables, but no joy. Is there anything obvious I could be missing? Any ideas would be greatly appreciated. Cheers, Ian
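
    Two quick checks that usually separate "not listening" from "blocked" (a sketch; on a Plesk box the firewall module can silently re-add rules, so a plain iptables insert is only a temporary test):

        # is apache actually bound on 8181, and by which process?
        netstat -tlnp | grep 8181
        # is anything filtering the port?
        iptables -L -n | grep 8181
        # temporary allow rule, for testing only
        iptables -I INPUT -p tcp --dport 8181 -j ACCEPT

    If the port answers locally (curl http://127.0.0.1:8181/) but not from a remote machine, that narrows it to the firewall or an upstream filter rather than Apache.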

    Read the article

  • file association in Windows 8

    - by Robith Nuriel Haq
    Associating a file with a program should be easy in Windows; however, I find it rather difficult in Windows 8. Associating a file with a desktop application that I install on my computer is easy, because the whole operation is exactly the same as in previous Windows releases. What I find rather difficult is associating a file with a Metro app downloaded from the Store. So far, I have been using Multimedia 8 (a Metro app) to open my video files. However, this app cannot handle particular files - like *.dat - that can easily be associated with desktop video applications, such as Media Player Classic and the like. When I try to associate my DAT files with Multimedia 8, there is indeed a "Look for another app on this PC" option at the bottom of the "Open with" pop-up, but alas, I cannot figure out how to locate the Multimedia 8 app to which I want to associate my DAT file (as well as other video files not yet associated with this Metro app). If anyone knows how to locate those Metro apps, please tell me. Many thanks.

    Read the article

  • Windows XP consuming drive letters

    - by billdehaan
    This one's a bit of a stumper. I'm running XP SP3, current with all fixes. My problem is that I can assign a drive letter to a container file (explained below) and it works just fine, but once I close the container, the drive letter is no longer available until the next boot.

    I've got some confidential data that I've placed in a container volume. I've used TrueCrypt (www.truecrypt.com) and FreeOTFE (www.freeotfe.org), both installed and portable versions of each, with the same result. I open the container file, assign it to a drive letter (say R:), and run some portable apps that are within the volume. When I'm done, I close the container, and the drive letter is released. Fine so far. However, when I attempt to re-open it, the previous drive letter (in this case R:) is no longer available. It's not mapped to anything; it's just unavailable. Even attempting something like "subst R: C:\" returns "Invalid Parameter - R:". I can use the S: drive, no problem, but the next day I have to use T:, then U:, etc. Eventually, I have to reboot to reclaim all of the drive letters.

    Unfortunately, everything I've read about drive letters relates to USB assignments, which doesn't apply here. I've tried the "show hidden" command (set devmgr_show_nonpresent_devices=1) with no success, and the Disk Management tool doesn't apply either, since it's not a physical drive. Does anyone know where Windows keeps the list of drive letters? And is there anything short of a reboot that can be used to reset it?
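
    For what it's worth, Windows keeps its persistent letter-to-volume map in the registry under HKLM\SYSTEM\MountedDevices, and mountvol works against the same list, so that may be the place to look for the stale R: entry. A hedged sketch from a command prompt:

        :: list current letter-to-volume bindings
        mountvol
        :: delete the stale binding for R: without a reboot
        mountvol R:\ /D
        :: the persistent map itself
        reg query HKLM\SYSTEM\MountedDevices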

    Read the article

  • Is there any limit to AIX 5.3 pipe size?

    - by snowflake
    Hello, I'm having trouble performing cat/tail/head operations on large files on AIX 5.3. When concatenating several 1 GB files, redirected to another one:

        cat file1 file2 file3 > outputfile

    the output file is limited to 2 GB (cat: output error, and the result file is 2147483647 bytes). The filesystem is jfs2. I successfully uploaded 10 GB files to the filesystem through FTP without problems. I found nothing relevant in /etc/security/limits:

        default:
            fsize = -1
            core = 2097151
            cpu = -1
            data = 262144
            rss = 65536
            stack = 65536
            nofiles = 20000

    ulimit -a:

        core file size (blocks) unlimited
        data seg size (kbytes) 245759
        file size (blocks) unlimited
        max memory size (kbytes) unlimited
        open files 2000
        pipe size (512 bytes) 64
        stack size (kbytes) 32768
        cpu time (seconds) unlimited
        max user processes 2048
        virtual memory (kbytes) 278527

    The problem does not occur on another AIX 5.3 server; I'm just looking for a configuration difference that might be the source of the problem. /etc/security/limits on the server without the problem:

        default:
            fsize = -1
            core = 2097151
            cpu = -1
            data = 262144
            rss = 65536
            stack = 65536
            nofiles = 20000

    ulimit -a on the server without the problem:

        core file size (blocks, -c) 1048575
        data seg size (kbytes, -d) 131072
        file size (blocks, -f) unlimited
        max memory size (kbytes, -m) 32768
        open files (-n) 20000
        pipe size (512 bytes, -p) 64
        stack size (kbytes, -s) 32768
        cpu time (seconds, -t) unlimited
        max user processes (-u) 262144
        virtual memory (kbytes, -v) unlimited
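
    Worth noting: 2147483647 bytes is exactly 2^31 - 1, which points at a signed 32-bit offset somewhere rather than a filesystem or fsize ceiling (both of which the output above rules out). A sketch to narrow down which layer imposes it: since the > redirection is performed by the shell, writing the file directly with dd bypasses both the shell and cat, so a difference between the two tests would implicate one of them rather than the kernel or jfs2:

        # write ~3 GB directly, bypassing shell redirection
        dd if=/dev/zero of=bigtest bs=1024k count=3000
        ls -l bigtest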

    Read the article

  • Batch deletion of smaller files from a group of files via the Unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos, and I want to keep the larger sizes of these photos. Each directory has 31 to 66 files in it: thumbnails, larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with:

        rm */example.jpg

    I initially thought it would be easy to delete the thumbnails too, but the problem is they are not consistently named. The typical pattern is photo1.jpg and photo1s.jpg, so I did:

        rm */photo*s.jpg

    but it turned out that some of the files named photoXs.jpg were actually the larger ones, not the smaller. Argh.

    So what I want to do is scan each directory for file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file, extract the size of each, and pick out those under a threshold. The problem? In one directory the large version is 1.1 MB and the thumb is 200 KB; in another the large is 200 KB and the small 30 KB. Even worse, the files really are mostly named photo1.jpg, so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming first, and if possible I'd prefer to keep them in their folders. I was almost resolved to just doing this all manually, but then thought I'd ask here. How would you do this task?
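
    Since the thumb should always have a smaller byte count than its sibling, one hedged approach is to work per directory and per basename pair, comparing sizes and moving (not deleting) the loser so mistakes are recoverable. A sketch assuming GNU stat and the photoN.jpg/photoNs.jpg naming; files that don't fit the pattern are left alone:

        #!/bin/sh
        # for each photoNs.jpg, compare it with its photoN.jpg sibling and
        # move the smaller of the pair into a thumbs/ subfolder for review
        for d in */; do
            mkdir -p "${d}thumbs"
            for s in "${d}"photo*s.jpg; do
                [ -e "$s" ] || continue
                big="${s%s.jpg}.jpg"
                [ -e "$big" ] || continue
                if [ "$(stat -c %s "$s")" -lt "$(stat -c %s "$big")" ]; then
                    mv "$s" "${d}thumbs/"
                else
                    mv "$big" "${d}thumbs/"
                fi
            done
        done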

    Read the article

  • My desktop has started overheating -- how hot is hot?

    - by Jerry
    I have a two-year-old desktop, some random quad-core HP desktop. It used to run very quietly, but in the past month the fans start up any time anything "serious" is being done: compiles, playing video, etc. Right now, SpeedFan and Speccy report the cores at between 50C and 70C; SpeedFan reports this as hot (nice flame icon).

    Well, the system does sit on my carpet, so two weeks ago I took off the lid, and cough *cough* it was pretty filled with dust. I got out an air can, turned on a vacuum, and carefully got the dust out of the CPU fan, the case fans, any fan I saw (graphics board), and blew out all the dust I could from all the circuit boards. Then I closed the case back up. It has definitely run cooler since then, but it still runs hot, and I hear high-speed fan noise I never heard before.

    How hot is too hot? At what temperatures do consumer-grade CPUs die? What should I be looking to do? Replace the CPU fan? (It seems to work.) Replace the power supply fan? Assuming the dust problem is gone, where should I be looking to determine why the machine is heating up?

    Epilogue: After following the various pieces of advice given here, the system did run cooler, but it was still noticeably louder (hotter) than just a few months prior. I ended up purchasing a new CPU heatsink and fan, and during installation found that the cooling grease on the original heatsink was a dried, cracked layer - probably more of an insulator than a heat-transfer agent. With the new fan AND the new heatsink compound, the system ran much, much cooler, and the fan rarely turns on.

    Read the article

  • Word 2013 can't compare readonly files

    - by Moshe Katz
    I am using TortoiseSVN to work with a repository that contains some documentation saved as Word documents. On my old computer, with Office 2010, I was able to compare with previous revisions: Tortoise would open Word in compare view so I could see the differences between the files. I have installed Office 2013 (the final version from TechNet, not the preview) on my new laptop for testing, and now I can no longer compare Word documents; Tortoise pops up a generic error that it was unable to compare the two files.

    Tortoise uses a JScript file to interface with Word, so I ran that file through a debugger and found that the actual error is:

        The Compare method or property is not available because this command is not available for reading.

    Some Googling followed by some testing revealed that the error is caused by the first file opened (in this case, the previous revision) being opened as read-only. If I change the JScript code to open it in normal mode, and I find the file on the system and un-check the "Read Only" property (if necessary), then the comparison opens as expected.

    I was unable to find any documentation about this change to Word on any Microsoft site. Does anyone know why this was changed and, if it is intentional and not a bug, what the benefit is of requiring the file to be writable in order to compare it with another?

    Note: This is tagged word-2013-preview, but it is actually about the release version of Word available on MSDN and TechNet. I do not have enough rep. on this site to create new tags (yet).
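
    For reference, ReadOnly is the third argument to Documents.Open in the Word object model, so the workaround amounts to a one-argument change in TortoiseSVN's diff script - roughly the following (a sketch; the variable names only approximate the stock diff-doc.js):

        // was: open the base document read-only (third argument = true)
        // objBaseDoc = objWord.Documents.Open(sBaseDoc, true, true);
        // Word 2013 workaround: open it writable so Compare stays available
        objBaseDoc = objWord.Documents.Open(sBaseDoc, true, false);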

    Read the article

  • Primary zone will not transfer to secondary zone

    - by Matt Beckman
    Using DNS on Windows Server 2008, there is a constant struggle with adding primary and secondary zones. I will add a primary zone on NS1 for a new domain, edit it as needed, and when it's ready add the secondary zone on NS2. However, MOST of the time, the secondary zone remains in an error state and never acquires the primary zone data. I have gone back to domains a few weeks after adding them to find out that Windows never propagated the change. Annoying.

    Anyway, I recently updated SP1 to SP2 thinking this would help, but it hasn't. I added two new domains today and spent an hour when the secondary zones would just not sync. During that time, the only error I saw in the logs was for one of them, where DNS complained about not being authoritative. In order to eventually resolve the issue, I ended up deleting the primary zone, creating a new primary zone, and hitting "Apply" after each and every field change. For example, after modifying the serial number from "1" to a date-appropriate "2010093001", I hit Apply, and then the Primary Server (apply), Responsible Person (apply), and finally Name Servers (apply). After I did this, the secondary zone didn't waste any time getting the data. Ideas?
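
    Two things worth checking from the command line when a secondary sticks like this (a hedged sketch; example.com stands in for a real zone, NS1/NS2 for the server names, and the IP is invented): transfers must be explicitly allowed on the primary, and the secondary can be told to pull immediately instead of waiting for the refresh timer.

        :: on NS1 - allow zone transfers to the secondary's IP only
        dnscmd NS1 /ZoneResetSecondaries example.com /SecureList 192.168.1.2
        :: on NS2 - force an immediate transfer attempt
        dnscmd NS2 /ZoneRefresh example.com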

    Read the article

  • why is rdiff-backup not compatible with encfs --reverse

    - by user330273
    I'm trying to use encfs with rdiff-backup to ensure that my backups to a remote server are encrypted. The easiest way to do this would be to use encfs --reverse, which means encfs will create a virtual encrypted file system which I can then back up using rdiff-backup. Except that it doesn't work: rdiff-backup fails every time with an "input/output error" on the encfs virtual filesystem.

    It seems I'm not the only one with this problem, but no one has said what the problem is. This person reported the same issue but was just told to use sshfs instead (see below on that); in a question on Server Fault, one of the answers just states that "rdiff-backup seems to have trouble accessing the EncFS-reverse filesystem." There's an open bug report on the Debian bug tracker (bug 731413, I can't post the link) about this, but it's been open since December 2013 with no response.

    Does anyone know what the problem actually is? Is there a workaround? I can't use the two most commonly suggested alternatives - sshfs with encfs running on top of it, or Duplicity - as both require a much higher-bandwidth connection than I have access to (Duplicity requires regular full backups).

    Read the article

  • Can't install PHP after apt-get dist-upgrade

    - by WASD42
    I had a server with a classical LAMP installation on Ubuntu 8.04 that ran perfectly for months:

        Linux localhost 2.6.24-23-generic #1 SMP Wed Apr 1 21:47:28 UTC 2009 i686 GNU/Linux
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=8.04
        DISTRIB_CODENAME=hardy
        DISTRIB_DESCRIPTION="Ubuntu 8.04.4 LTS"

    I don't know why I started apt-get update and apt-get upgrade, but everything ended with apt-get dist-upgrade :) It all seemed to go fine, but now I can't start Apache or PHP, because PHP was simply removed. When I try to install it:

        > apt-get install php5
        <...>
        The following packages have unmet dependencies:
          php5: Depends: libapache2-mod-php5 (>= 5.2.4-2ubuntu5.17) but it is not going to be installed or
                         php5-cgi (>= 5.2.4-2ubuntu5.17) but it is not going to be installed
        E: Broken packages

    When I try to install libapache2-mod-php5:

        The following packages have unmet dependencies:
          libapache2-mod-php5: Depends: php5-common (= 5.2.4-2ubuntu5.17) but 5.3.6-6~dotdeb.1 is to be installed
        E: Broken packages

    I don't know what 5.3.6-6~dotdeb.1 is or where this package comes from, because I've already removed the dotdeb repository from my APT sources :/ I've tried apt-get update, apt-get upgrade, and apt-get install php5 php5-common php5-cli with no success. I don't know what to try next :(
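
    The 5.3.6-6~dotdeb.1 string is a dotdeb build of php5-common that is still installed from the old repository, and apt will not downgrade it on its own. A sketch of the usual way back to the stock hardy packages (the version string is the one from the error output above):

        apt-cache policy php5-common     # shows which repo the installed version came from
        apt-get update                   # with the dotdeb line gone from sources.list
        # force the Ubuntu version back over the dotdeb one
        apt-get install php5-common=5.2.4-2ubuntu5.17 php5 libapache2-mod-php5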

    Read the article

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser points to a local webpage (examplePage) after I played with some config files a while back, and I can't figure out how to restore the default behavior.

    Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple of weeks ago, I was on a mission to get site-root-relative URLs (/resource) to work, so I played around with a bunch of apache/conf files, including httpd.conf and httpd-vhosts.conf, and was also messing with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of it. Many of my changes stemmed from suggestions in this StackOverflow post.

    What I've tried:
    - I commented out my additions to the hosts file.
    - I turned off XAMPP (thus hopefully negating any Apache config file effect).
    - I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs).

    localhost still displays examplePage, even with XAMPP turned on (my reverted DocumentRoot isn't taking effect). Does anyone know what I may have done and how I can fix it?

    Update: It's been resolved; thank you everyone so much. In Task Manager, there were a couple of instances of httpd.exe (Apache HTTP Server). I ended these, opened XAMPP, and restarted Apache. All references to examplePage in my .conf files that I could find had been commented out or removed; I imagine the old versions were still in effect for some reason, and manually ending the Apache processes fixed this. As a point of interest, it's still a mystery why those processes were running - I cannot reproduce that situation. I must have stumbled upon a XAMPP bug of some sort.
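
    For anyone hitting the same thing, a command-line version of the fix described in the update (find whatever still owns port 80, then end it) looks roughly like this - the PID 1234 is just a placeholder:

        :: find the PID bound to port 80
        netstat -ano | findstr :80
        :: confirm it is a stray httpd.exe, then end it and restart Apache from XAMPP
        taskkill /PID 1234 /F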

    Read the article
