Daily Archives

Articles indexed Monday, March 26, 2012


  • Damaged XenServer Storage LVM partition table

    - by Fiolek
    I have a home server running under XenServer with three 1 TB disks inside: one for XenServer itself and two mirrored (using Intel's fakeRAID and dmraid) for VMs and user data (though now I suspect the RAID was never actually working). I tried to pass a PCI card through to a VM using PCI passthrough, and I read somewhere that I needed to recompile the kernel with the pciback module, but something went wrong (I made a mistake in /boot/extlinux.conf and the server couldn't boot), so I had to use a GParted LiveCD (I already had it on a USB key) to fix it. But when I restarted the server, all the VDIs were gone. I have no idea what could have gone wrong. I tried to repair the RAID with dmraid -R in the hope that everything would return to normal, but now I think that did more harm than good (and corrupted the rest of the LVM table...). Is there any way to recover this SR, or at least the data from one (~100 GB) VDI? I also want to apologise for my English; I'm not from an English-speaking country and I'm only 16, so I haven't had much chance to learn it properly.
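
    A possible starting point, sketched only (the volume group name below is a placeholder): LVM keeps automatic metadata backups under /etc/lvm/archive in the control domain, so if the PV label on the disk survived, vgcfgrestore may be able to bring the SR's metadata back.

        # sketch only: run in the XenServer control domain; the VG name is a placeholder
        pvscan                                      # are the physical volumes still visible?
        vgcfgrestore --list VG_XenStorage-<uuid>    # list the metadata backups LVM keeps automatically
        vgcfgrestore -f /etc/lvm/archive/<file> VG_XenStorage-<uuid>   # restore a known-good copy
        vgchange -ay VG_XenStorage-<uuid>           # re-activate the logical volumes (the VDIs)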

    Read the article

  • How to specify search domain name of nginx resolver for proxy_pass

    - by myjpa
    Assuming my server is www.mydomain.com, on nginx 1.0.6 I'm trying to proxy all requests to http://www.mydomain.com/fetch to other hosts; the destination URL is specified as a GET parameter named "url". For instance, when a user requests either of http://www.mydomain.com/fetch?url=http://another-server.mydomain.com/foo/bar or http://www.mydomain.com/fetch?url=http://another-server/foo/bar it should be proxied to http://another-server.mydomain.com/foo/bar. I'm using the following nginx config, and it works fine only if the url parameter contains a domain name, like http://another-server.mydomain.com/...; it fails on http://another-server/... with the error: another-server could not be resolved (3: Host not found). nginx.conf is:

        http {
            ...
            # the DNS server
            resolver 171.10.129.16;
            server {
                listen      80;
                server_name localhost;
                root        /path/to/site/root;
                location = /fetch {
                    proxy_pass $arg_url;
                }
            }
        }

    Here I'd like to resolve any URL without a domain name as a host name in mydomain.com. In /etc/resolv.conf it is possible to specify a default search domain for the whole Linux system, but it doesn't affect the nginx resolver:

        search mydomain.com

    Is this possible in nginx? Or alternatively, how can I "rewrite" the url parameter so that I can add the domain name?
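
    One possible direction, sketched only and untested (it reuses the mydomain.com name from the question): have the /fetch location inspect $arg_url and, when the host part contains no dot, append the search domain before handing the result to proxy_pass.

        location = /fetch {
            resolver 171.10.129.16;
            set $target $arg_url;
            # host part with no dot: append the default search domain (sketch only)
            if ($arg_url ~* "^(https?)://([^./]+)(/.*)$") {
                set $target "$1://$2.mydomain.com$3";
            }
            proxy_pass $target;
        }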

    Read the article

  • DNAT to 127.0.0.1 with iptables / Destination access control for transparent SOCKS proxy

    - by cdauth
    I have a server on my local network that acts as a router for the computers on my network. What I want to achieve is that outgoing TCP requests to certain IP addresses are tunnelled through an SSH connection, without giving the people on my network the possibility of using that SSH tunnel to connect to arbitrary hosts. The approach I had in mind so far was to have an instance of redsocks listening on localhost and to redirect all outgoing requests to the IP addresses I want to divert to that redsocks instance. I added the following iptables rule:

        iptables -t nat -A PREROUTING -p tcp -d 1.2.3.4 -j DNAT --to-destination 127.0.0.1:12345

    Apparently, the Linux kernel considers packets coming from a non-127.0.0.0/8 address to a 127.0.0.0/8 address "Martian packets" and drops them. What did work, though, was to have redsocks listen on eth0 instead of lo and then have iptables DNAT the packets to the eth0 address instead (or use a REDIRECT rule). The problem with this is that every computer on my network can then use the redsocks instance to connect to any host on the internet, but I want to limit its usage to a certain set of IP addresses only. Is there any way to make iptables DNAT packets to 127.0.0.1? Otherwise, does anyone have an idea how I could achieve my goal without opening up the tunnel to everyone? Update: I have also tried to change the source of the packets, without any success:

        iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 1.2.3.4 -j SNAT --to-source 127.0.0.1
        iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 127.0.0.1 -j SNAT --to-source 127.0.0.1
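
    A hedged sketch of one alternative (it assumes the LAN side is eth0 and redsocks listens on port 12345): keep using REDIRECT so that only the listed destinations are ever diverted, then drop direct connections to the redsocks port so LAN hosts cannot use it as an open proxy; the conntrack DNAT state separates redirected traffic from direct connections.

        # sketch only; assumes the LAN interface is eth0 and redsocks listens on 12345
        iptables -t nat -A PREROUTING -i eth0 -p tcp -d 1.2.3.4 -j REDIRECT --to-ports 12345
        # only connections that were actually redirected/DNATed may reach the redsocks port
        iptables -A INPUT -i eth0 -p tcp --dport 12345 -m conntrack ! --ctstate DNAT -j DROP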

    Read the article

  • Windows 8 and SMB2 Issues

    - by Rhys Paterson
    We're playing with the Consumer Preview of Windows 8 and having issues accessing some network shares in our environment. Basically, when I attempt to access a share directly (\[SERVER].[DOMAIN].[NETWORK]\Share$) I get 'An extended error has occurred'. The shares reside on an EMC Celerra system. Sorry, I don't really have much more information about it (this is just a little side project). Accessing shares that reside on Windows machines works fine. The firewall is completely disabled and I am running under full domain administrative credentials. A quick Wireshark capture shows the following group of packets between myself and the server:

        SMB2 164 NegotiateProtocol Request
        SMB2 274 NegotiateProtocol Response
        SMB2 981 SessionSetup Request
        SMB2 281 SessionSetup Response
        SMB2 200 TreeConnect Request Tree: \\[SERVER].[DOMAIN].[NETWORK]\[SHARE]$
        SMB2 138 TreeConnect Response
        SMB2 202 Ioctl Request NETWORK_FILE_SYSTEM Function:0x0080
        SMB2 131 Ioctl Response, Error: STATUS_INVALID_DEVICE_REQUEST
        SMB2 126 SessionLogoff Request
        SMB2 126 SessionLogoff Response

    This repeats five times and then (I assume) Windows throws me the above error. A quick Google shows me: 0xC0000010 STATUS_INVALID_DEVICE_REQUEST "The specified request is not a valid operation for the target device." Which tells me that the NETWORK_FILE_SYSTEM Function:0x0080 request is invalid. Does anyone know what would cause this? Thanks in advance. Rhys. Edit: FYI - as a workaround, you can disable SMB 2.2 as noted in the EMC thread:

        sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
        sc config mrxsmb20 start= disabled

    This will allow the machine to access the shares. The below answer still stands though :)

    Read the article

  • nginx: How can I set proxy_* directives only for matching URIs?

    - by Artem Russakovskii
    I've been at this for hours and I can't figure out a clean solution. Basically, I have an nginx proxy setup which works really well, but I'd like to handle a few URLs more manually. Specifically, there are 2-3 locations for which I'd like to set proxy_ignore_headers to Set-Cookie to force nginx to cache them (nginx doesn't cache responses with Set-Cookie, as per http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers). So for these locations, all I'd like to do is set proxy_ignore_headers Set-Cookie; I've tried everything I could think of short of setting up and duplicating every config value, but nothing works. I tried: nesting location directives, hoping the inner location which matches on my files would just set this value and inherit the rest, but that wasn't the case (it seemed to ignore anything set in the outer location, most notably proxy_pass, and I end up with a 404); specifying the proxy_cache_valid directive in an if block that matches on $request_uri, but nginx complains that it's not allowed ("proxy_cache_valid" directive is not allowed here); specifying a variable equal to "Set-Cookie" in an if block, and then trying to set proxy_cache_valid to that variable later, but nginx isn't allowing variables for this case and throws up. It should be so simple - modifying/appending a single directive for some requests - and yet I haven't been able to make nginx do it. What am I missing here? Is there at least a way to wrap common directives in a reusable block and have multiple location blocks refer to it, after adding their own unique bits? Thank you. Just for reference, the main location / block is included below, together with my failed proxy_ignore_headers directive for a specific URI.

        location / {
            # Setup var defaults
            set $no_cache "";

            # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
            if ($request_method !~ ^(GET|HEAD)$) {
                set $no_cache "1";
            }

            if ($http_user_agent ~* '(iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry|webos|s8000|bada)') {
                set $mobile_request '1';
                set $no_cache "1";
            }

            # feed crawlers, don't want these to get stuck with a cached version,
            # especially if it caches a 302 back to themselves (infinite loop)
            if ($http_user_agent ~* '(FeedBurner|FeedValidator|MediafedMetrics)') {
                set $no_cache "1";
            }

            # Drop no cache cookie if need be
            # (for some reason, add_header fails if included in prior if-block)
            if ($no_cache = "1") {
                add_header Set-Cookie "_mcnc=1; Max-Age=17; Path=/";
                add_header X-Microcachable "0";
            }

            # Bypass cache if no-cache cookie is set, these are absolutely critical
            # for Wordpress installations that don't use JS comments
            if ($http_cookie ~* "(_mcnc|comment_author_|wordpress_(?!test_cookie)|wp-postpass_)") {
                set $no_cache "1";
            }

            if ($request_uri ~* wpsf-(img|js)\.php) {
                proxy_ignore_headers Set-Cookie;
            }

            # Bypass cache if flag is set
            proxy_no_cache $no_cache;
            proxy_cache_bypass $no_cache;

            # under no circumstances should there ever be a retry of a POST request,
            # or any other request for that matter
            proxy_next_upstream off;
            proxy_read_timeout 86400s;

            # Point nginx to the real app/web server
            proxy_pass http://localhost;

            # Set cache zone
            proxy_cache microcache;

            # Set cache key to include identifying components
            proxy_cache_key $scheme$host$request_method$request_uri$mobile_request;

            # Only cache valid HTTP 200 responses for this long
            proxy_cache_valid 200 15s;
            #proxy_cache_min_uses 3;

            # Serve from cache if currently refreshing
            proxy_cache_use_stale updating timeout;

            # Send appropriate headers through
            proxy_set_header Host $host;                                   # no need for this
            proxy_set_header X-Real-IP $remote_addr;                       # no need for this
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Set files larger than 1M to stream rather than cache
            proxy_max_temp_file_size 1M;

            access_log /var/log/nginx/androidpolice-microcache.log custom;
        }
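
    One way to get a reusable block plus a single extra directive per location, sketched only (the include path is a placeholder and this is untested against the exact config above): move everything currently inside location / into a separate file and include it wherever needed; nginx allows include inside location blocks, so a second, more specific location can layer proxy_ignore_headers on top of the shared settings.

        # /etc/nginx/proxy_microcache.conf  (placeholder path)
        # ... everything that currently lives inside "location / { ... }" ...

        location / {
            include /etc/nginx/proxy_microcache.conf;
        }

        # same shared settings, plus the one extra directive, for the scripts to be cached
        location ~* wpsf-(img|js)\.php$ {
            include /etc/nginx/proxy_microcache.conf;
            proxy_ignore_headers Set-Cookie;
        }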

    Read the article

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to two or three clients soon). We are considering ordering about $5000 worth of 3 TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, but I am not experienced with it. My question is: does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I back up the RAID 6 production array to a JBOD? EDIT: The data server is running Windows Server 2008 x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5 arrays instead of one RAID 6?

    Read the article

  • What happens when you uninstall a per-user installation?

    - by CraigJ
    What happens if an MSI installation is set to install per-user, and three different users log on and each install the app? Will Windows Installer recognise that the same MSI has already been installed into Program Files and therefore doesn't need to be installed again? What happens if one of the three users then uninstalls the app while they are logged in? Will Windows Installer recognise that two other users still need the app to be installed and therefore leave the app folder in Program Files alone?

    Read the article

  • Plesk + Apache + PHP (FastCGI): Constant session permissions problems, conflicts between HTTP / HTTPS

    - by Hans Engel
    I've just moved a collection of sites over to a brand-new server, running Apache 2.2.3, PHP 5.3, and Plesk 10.1.1. I am having problems with file permissions on PHP sessions, which are being stored in /var/lib/php/session. I originally set the permissions like so for this folder:

        drwxrwx--- 2 apache psacln 8192 Mar 22 23:25 session

    This worked fine for HTTP sessions. Files were being saved in that folder with these permissions:

        -rw------- 1 client1 psacln 0 Mar 22 23:24 sess_507...
        -rw------- 1 client2 psacln 0 Mar 22 23:25 sess_8o1...

    The problem, however, is that PHP scripts accessed via HTTPS do not seem to run as the same client1 or client2 user. I deleted the files in the session directory and accessed a login page via HTTPS to see how sessions were being saved when initiated via this protocol:

        -rw------- 1 apache apache 0 Mar 22 23:25 sess_507...

    So, for whatever reason, sessions initiated by clients browsing over HTTPS were being saved by apache:apache, while sessions from HTTP clients were saved as someclient:psacln. What I'd like to ask: How can I avoid this problem with session permissions? When a session is created via unencrypted HTTP and the client then visits an HTTPS portion of the site, permission errors are shown, since apache:apache tries to access the session file created by someclient:psacln. The converse is also true. Can I change the user which runs the Apache HTTPS server, via Plesk or the command line? If not, can I have PHP sessions save with rw-rw---- permissions, and then add apache to the psacln group? Any other suggestions on how to fix this issue?
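
    A hedged sketch of the "shared group" route mentioned in the last question (it assumes the session path from the question and that Plesk has no objection to apache joining psacln): put both users in one group and have PHP create session files group-readable/writable.

        # sketch only
        usermod -a -G psacln apache
        chmod 770 /var/lib/php/session
        # in php.ini (or the per-domain PHP settings), the "N;MODE;path" form of
        # session.save_path sets the mode of newly created session files:
        #   session.save_path = "0;660;/var/lib/php/session"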

    Read the article

  • Active Directory Domain Services - Network Name Cannot be Found

    - by Arief
    I have a really weird problem that I cannot explain. I am trying to redirect all users' home folders to a new server. I have copied all the files, including the permissions, to the new server. All I need to do now is update each user profile's home folder path with the new server name. However, I get this message when I enter the new server name: The server acting as the AD domain controller can resolve the name with both ping and nslookup. The only thing I don't understand is why the MMC cannot resolve the name. I also tried using the IP address instead, and I still get the same error message. Thank you so much for your help. UPDATE: I think I know what the problem is, but I don't know how to fix it. The new server that will host all the home folders is actually sitting in the cloud with a different IP range from the Domain Controller. The Domain Controller sits locally in the office on 10.0.0.0/24 addresses. The new server in the data centre is on 172.10.10.10/24 addresses. Static routes have been set up on both ends, and the DNS as well. I believe this is the issue. Does anyone know how to overcome this situation? Thank you.
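
    A quick hedged check from the domain controller (the server name below is a placeholder): "network name cannot be found" usually points at SMB reachability (TCP 445/139) across the route to the data centre rather than at DNS.

        rem sketch only; the server name below is a placeholder
        nslookup fileserver01.example.com
        ping fileserver01.example.com
        rem can the DC reach the shares over SMB, and is TCP 445 open across the static route?
        net view \\fileserver01.example.com
        rem (telnet requires the Telnet Client feature to be installed)
        telnet fileserver01.example.com 445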

    Read the article

  • Ubuntu only boots with USB plugged in

    - by Ben
    I'm new to the Linux world, so please bear with me! :-) I installed Ubuntu onto my hard drive via a USB drive. If I boot the PC without the USB drive I used, Ubuntu will not load. After booting I can unplug it without any consequences. I looked on the hard drive and there is a boot folder. On the USB drive, this is the tree contents:

        /media/disk$ tree
        .
        |-- adtext.cfg
        |-- boot.cat
        |-- f10.txt
        |-- f1.txt
        |-- f2.txt
        |-- f3.txt
        |-- f4.txt
        |-- f5.txt
        |-- f6.txt
        |-- f7.txt
        |-- f8.txt
        |-- f9.txt
        |-- initrd.gz
        |-- isolinux.bin
        |-- isolinux.cfg
        |-- ldlinux.sys
        |-- linux
        |-- menu.c32
        |-- menu.cfg
        |-- po4a.cfg
        |-- prompt.cfg
        |-- splash.png
        |-- stdmenu.cfg
        |-- syslinux.cfg
        |-- text.cfg
        |-- ubnfilel.txt
        |-- ubnpathl.txt
        `-- vesamenu.c32

    Am I correct in my assumption that the boot aspect is tied to the USB drive? If so, how do I get it to boot without the USB? I'm guessing copying something into some location and modifying GRUB?
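
    The files listed above are syslinux/isolinux boot files, which suggests the bootloader still lives on the USB key. A hedged sketch of the usual fix, assuming the internal disk is /dev/sda (check with sudo fdisk -l first): install GRUB to the internal disk's MBR from the running system.

        # sketch only; make sure /dev/sda really is the internal disk before running this
        sudo grub-install /dev/sda
        sudo update-grub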

    Read the article

  • Synergy to share mouse only (not keyboard)

    - by Stan
    Synergy is very handy for sharing a single mouse/keyboard set between multiple PCs. But I have a laptop (WinXP SP3) and a desktop (OS X Snow Leopard), plus a mouse and a keyboard that plug into the desktop. I don't need to share the keyboard since each machine has its own. Is there any way to share the mouse only? Having keyboard sharing enabled might cause inconvenience. Thanks. PS. Another quick question: how do I stop the Synergy daemon on Mac OS? synergys --help doesn't give any information.
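
    For the PS, a hedged sketch that assumes synergys was started by hand rather than through launchd (if it was set up as a launchd job, unload that job instead):

        # sketch only
        ps aux | grep [s]ynergys    # is it running, and how was it started?
        killall synergys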

    Read the article

  • Indesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages generated automatically. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me, and Google searches haven't returned much fruitful. Here are my present steps: I create a master page as my template; I add a bunch of text frames where I want the imported data from the XML file to be placed; I open the "Tags" window and import an XML file; I mark the text frames in the master document with the appropriate tags; I then add a lot of pages (like 200) to the document; then I use "Import XML" to try to get the data brought in and filled across all 200 pages. This is where I fail. There's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Does anyone have any good tips for mail-merge-like functionality with an XML document and auto-generation of InDesign pages? By the way, here's an example of Adobe's great documentation for merging repeated XML elements: InDesign CS4 Docs: XML > Importing XML > Working with Repeating Data. There's got to be more... Here's some of the sample XML; notice that the item element repeats. I've also truncated the data in the "desc" tag:

        <output>
          <item>
            <user_name>taude</user_name>
            <date>2009-02-21</date>
            <title>Wishful Thinking</title>
            <desc>Skiing up in Vermont on a beautiful day. This photo of</desc>
            <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail>
          </item>
          <item>
            <user_name>taude</user_name>
            <date>2009-02-22</date>
            <title>Skiing Self Portrait</title>
            <desc>I was inspired by ML's self-portrait while </desc>
            <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail>
          </item>
        </output>

    Here's what my imported XML looks like with the InDesign structure:

    Read the article

  • File Saving Sometimes Fails

    - by YellPika
    When I attempt to save files, it sometimes (randomly) fails. In Blender, I sometimes get "Version Backup Failed: File Saved With @". In Visual Studio, building sometimes fails with an error message indicating that the target file/exe cannot be overwritten. If I wait a bit, I can save fine. It's almost as if programs are taking an abnormal amount of time to 'let go' of the files. What could be causing this behaviour? This seems to be caused by Windows Live Mesh monitoring my files, and locking them whenever it uploads the new versions (BAD considering the amount of times I save my files, even redundantly). Any suggestions to work around this behaviour? Should I switch to a better service to sync my files?

    Read the article

  • Dell XPS 420: Turning computer on, video ports showing static

    - by Roy Rico
    I have a Dell XPS 420 that I recently upgraded with an EVGA GTX 550 Ti and an 80 GB SSD, and converted into a home theater PC, which I've always had powered on. The computer stopped responding, so I restarted it, and when it came back up the video ports were showing static. I was getting static on my TV through both DVI ports and the mini HDMI port. I assumed my video card had gone bad, but I completely replaced it and the same behavior still occurs. I do not know how to diagnose this issue. Any thoughts?

    Read the article

  • Are there any Pandora/Slacker like applications to create stations/play lists from personal mp3 library?

    - by Randy K
    I'm interested in having playlists created automatically, much in the same way that Pandora and Slacker Radio "create" radio channels. I understand that iTunes has the Genius feature, but this requires that the files have been encoded with iTunes, which is not the case with my music; that, among other reasons, is why I'm not considering iTunes. I'm running a Windows environment, but would consider Linux options, as it would give me a reason to learn more about Linux. In the end the music files and playlists will end up on my Android phone and tablets. Working with Amazon's Cloud Player would be nice, but it is not required.

    Read the article

  • How do i have 4GB of video memory with a 1GB video card?

    - by Tomas Spc Yaczik
    I ran the DirectX diagnostic tool to take a look at my graphics capabilities, and it's telling me that the approximate video memory comes in at 4096 MB. That doesn't make any sense; how is that possible? The only thing I could think of was that DirectX was inaccurate, so I looked at the graphics statistics of my computer in CPU-Z, which was included with the motherboard, and that's telling me I have "negative" 1988 MB. Not only is that not 4096 MB, but it's giving me a negative number, and I know for a fact that you can't have a negative amount of memory! The only things I can think of that could be inflating my video memory figure are the PCIe 3.0 bus, or that the motherboard is somehow counting the onboard video chipset together with the video card, but even then that would only be 2 GB of video memory, which would still have to be inflated by something. Any suggestions?

    Read the article

  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated. It is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of now that certain actions on one file could silently affect a number of other files? I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem. I don't see any unexpected consequences. I don't think copying hardlinked files is a problem, but I'm not as confident about any unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has 3 hard disks.) Changing permissions does affect all linked files. So far this has proven handy. (I made a large number of the hardlinked files read-only.) None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode). My proposed solution to this is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule. In that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know? BTW, I'm running Kubuntu 12.04. I'm also using btrfs. (I have 2 SSD's and 1 HDD in the PC. I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.) BTW, since I have more than one drive (with separate file systems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script: sed -i 's/\(.\)/\1/' file1 Surprisingly, this even unlinks zero byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use-case, unlink command doesn't really "unlink", rather it "removes".
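
    Two small sketches that may help with the read-only plan above: listing which files still share an inode, and the copy-then-rename way to break a single link (this works on the same filesystem too, so no round trip to another drive is needed).

        # sketch only: list files whose link count is greater than 1 (inode, count, path)
        find /home -xdev -type f -links +1 -printf '%i %n %p\n' | sort -n
        # break the hard link on one file before editing it: copy, then replace
        cp -p file file.tmp && mv file.tmp file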

    Read the article

  • Issue tabbing between fields in Tiddlywiki

    - by leeand00
    TAB and SHIFT + TAB are great for getting in and out of fields without taking your hands off the keyboard. In my Firefox 9.0 installation I installed and then disabled Tab in Textarea 0.10.2 (tabinta) and now when I try to tab in and out of a textarea on the page using TAB and SHIFT + TAB it gives me a tab instead of the expected movement in and out of the textarea. Is there some way I can get this functionality back to normal?

    Read the article

  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems. First: how do I partition the flash drive? I shouldn't need to do this, but I'm no longer sure my partition is properly aligned, since I was forced to delete and create a new partition table after GParted complained when I tried to reformat the drive from FAT to ext4. The naive answer would be "just use the defaults and everything will be all right." However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html. Then there is also the issue of cylinders, heads and sectors. Currently I get this:

        $ sfdisk -l -uM /dev/sdd
        Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
        Warning: The partition table looks like it was made for C/H/S=*/255/63 (instead of 30147/64/32).
        For this listing I'll assume that geometry.
        Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

           Device Boot Start    End    MiB   #blocks  Id  System
        /dev/sdd1          1  30146  30146  30869504  83  Linux

        $ fdisk -l /dev/sdd
        Disk /dev/sdd: 31.6 GB, 31611420672 bytes
        255 heads, 63 sectors/track, 3843 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently it's at 1 MiB). But I still don't know how to set the heads and sectors properly for my device. Second problem: the file system. From the benchmarks I saw, ext4 provides the best performance; however, there is the issue of wear leveling. How can I know whether my Transcend JetFlash 700's microcontroller provides wear leveling? Or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that," but I've never seen a single piece of evidence backing that up, and at some point people start mixing up SSD and USB flash drive technology. The safe option would be to go for ext2, but a series of tests that I performed showed horrible performance! These values are from a real scenario and not some synthetic test (42 files, 3,429,415,284 bytes, copied to the flash drive):

        original FAT32:                 15.1 MiB/s
        ext4 after new partition table: 10.2 MiB/s
        ext2 after new partition table:  1.9 MiB/s

    Please read the links that I posted above before answering. I would also be interested in answers backed up with some references, because a lot is said and re-said but then it lacks facts. Thank you for the help.
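
    For the first problem, a hedged sketch of the partitioning step (it assumes the stick really is /dev/sdd; double-check with lsblk before running anything, and the 4 MiB figure is taken from the discussion above, not measured for this particular drive):

        # sketch only; this destroys everything on /dev/sdd
        parted -s /dev/sdd mklabel msdos
        parted -s -a optimal /dev/sdd mkpart primary 4MiB 100%
        # ext4 without a journal keeps most of the ext4 speed while avoiding extra writes
        mkfs.ext4 -O ^has_journal -L usbdata /dev/sdd1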

    Read the article

  • How do you get Windows 7 to show time remaining in the battery meter?

    - by MrDaniel
    Running Microsoft Windows 7 Home Premium on an HP laptop. The battery meter in the system tray never shows the time remaining; it only ever shows a percentage-remaining number, as pictured. The Windows help documentation on the battery meter seems to indicate that it should display a time-remaining estimate; is this accurate? From the help documentation: How accurate is the battery meter? The accuracy of what the battery meter reports—what percentage of a full charge remains and how long you can use your laptop before you must plug it in—depends on several factors. Most of these factors fall into the following two categories: What you use the laptop for. Because some activities drain the battery faster than others (for example, watching a DVD consumes more power than reading and writing e-mail), alternating between activities that have significantly different power requirements changes the rate at which your laptop uses battery power. This can vary the estimate of how much battery charge remains. Battery hardware and sensor circuitry. Newer, "smart" batteries are equipped with circuitry that calculates the measurements of charge remaining and reports the information to the battery meter. Older batteries use less sophisticated circuitry and might be less accurate.

    Read the article

  • I2C_SLAVE ioctl in i2c linux driver

    - by zac
    I am supposed to write a simple write-and-read program for I2C, but the problem is that I don't have the device at hand right now to test it, so I need my code to be right the first time. I am confused about the function of the I2C_SLAVE ioctl. From what I read, this ioctl is used to set the slave address. But we pass the slave address again when performing a read/write with the I2C_RDWR ioctl, via addr in the i2c_msg structure. So then, what is the function of the I2C_SLAVE command? Do I need to call it every time I perform a read or write operation? Thank you in advance.
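
    A minimal hedged C sketch of the difference (the bus number, chip address 0x50 and register 0x00 are placeholders): I2C_SLAVE fixes the address used by plain read()/write() on the file descriptor, so it only needs to be issued once per descriptor (or when switching chips), while I2C_RDWR carries the address inside each i2c_msg and does not rely on I2C_SLAVE at all.

        /* sketch only: placeholders for bus, address and register; error handling trimmed */
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/i2c.h>      /* struct i2c_msg; older i2c-tools headers may instead
                                       define these types in <linux/i2c-dev.h> alone */
        #include <linux/i2c-dev.h>  /* I2C_SLAVE, I2C_RDWR, struct i2c_rdwr_ioctl_data */

        int main(void)
        {
            int fd = open("/dev/i2c-1", O_RDWR);
            if (fd < 0)
                return 1;

            /* I2C_SLAVE: set once; this is the address plain read()/write() will use */
            if (ioctl(fd, I2C_SLAVE, 0x50) < 0)
                return 1;
            unsigned char reg = 0x00, val = 0;
            write(fd, &reg, 1);   /* goes to 0x50, the address set above */
            read(fd, &val, 1);

            /* I2C_RDWR: each i2c_msg names its own address, so I2C_SLAVE is not needed here */
            struct i2c_msg msgs[2] = {
                { .addr = 0x50, .flags = 0,        .len = 1, .buf = &reg },
                { .addr = 0x50, .flags = I2C_M_RD, .len = 1, .buf = &val },
            };
            struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };
            ioctl(fd, I2C_RDWR, &xfer);

            close(fd);
            return 0;
        }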

    Read the article

  • Windows 7 laptop's volume with headphones way too loud

    - by jcao219
    I have two pairs of headphones; neither has an amp or volume control, and both have the same problem with this computer. I normally keep the volume at 10%, and that is optimal. I'm still very afraid that someday it will be at 100% and I'll accidentally have my headphones on. I tested how loud it is at maximum volume - it's earsplittingly loud! In fact, it's so loud that if I lay the headphones down on the table, I can hear a song perfectly clearly from meters away. Is there any way to make the volume control safer for the ears?

    Read the article

  • Sync iCloud bookmarks with Firefox in Windows 7

    - by MDMarra
    I have four devices that I want to sync my bookmarks across: a Mac Mini, a MacBook Air, an iPad 2, and a Windows 7 PC. I use Safari on the three Apple products and Firefox 8 on the Windows 7 PC. I use iCloud to sync my bookmarks with the three Apple devices, but this obviously leaves my Windows PC out of the loop. I realize that I can sync the bookmarks to IE and then constantly export them and import them into Firefox, but this is both clunky and not really a "sync", since things that I bookmark on my PC in Firefox won't show up on my other devices. What is the best way to sync my bookmarks from iCloud with a PC running Firefox 8?

    Read the article

  • Prevent important OS X process being swapped out, without code change?

    - by purplie
    I want hotkey-based utilities like Quicksilver or Zooom to respond immediately. But if they have been idle for a while, they (I guess) get swapped out and respond slowly, sometimes not even registering the first few keystrokes I wanted to send to them. How can I encourage such processes (i.e. chosen processes, not all processes system-wide) to remain in active memory? Or am I misunderstanding the problem?

    Read the article

  • Hints on diagnosing performance issue in OpenBSD firewall

    - by Tom
    My OpenBSD 4.6 pf firewall has started performing really badly in the past few weeks. I've isolated the firewall (as opposed to the WAN connection, switch, cable, etc.) as the problem, but need a hint on how to diagnose or fix it further. The facts: The normal setup is DSL modem - firewall external NIC - firewall internal NIC - switch - laptop. The normal setup described above gives only 25 Kbps! Plugging the laptop straight into the DSL modem gives a 1 Mbps connection (full speed, as advertised), so the DSL connection seems to be OK. Plugging the laptop directly into the firewall's internal NIC (bypassing the switch) also gives only 25 Kbps, so the switch does not seem to be the problem. I've replaced the Ethernet cables, but it didn't help. Here's the weird thing: reloading the ruleset (/sbin/pfctl -Fa -f /etc/pf.conf) causes the laptop's connection to go up to 1 Mbps (i.e. full speed) for a few minutes before it gradually degrades back down to 25 Kbps again. Any ideas on what's wrong or how I could diagnose the problem further?
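
    A hedged place to start looking: pfctl -Fa also flushes the state table, so the "fast right after a reload, slow a few minutes later" pattern suggests watching state-table growth and limits (or a scrub/queue rule) as the slowdown comes back.

        # sketch only: take these readings right after a reload and again once it slows down
        pfctl -si               # state-table counters, searches/inserts, memory failures
        pfctl -sm               # hard limits (states, frags, src-nodes, tables)
        pfctl -ss | wc -l       # number of states currently held
        systat states 1         # live view; also check netstat -i and dmesg for NIC errors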

    Read the article
