Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • Determining the Source of a Given File System Mount on Unix [migrated]

    - by phobos51594
    Background: Recently I have run into a bit of a snag on my home FreeBSD server. I upgraded it to the latest stable release, and I have noticed some strange behavior with the /var partition. Originally, I had the system configured such that /var had its own partition, with /var/run and /var/log in memory disks (/tmp, too). After the upgrade, I noticed there is a new, fourth memory disk mounting directly to /var that I had not set up manually and that is not in my fstab. It is only 28 MB or so in size and is causing problems when I try to update my ports collection. The ramdisk mounts automagically at boot and cannot be unmounted while in multi-user mode. If I drop to single-user mode, I am able to unmount it without issue; however, rebooting causes it to pop right back up. System specifications are included at the end of the post.

    Question: Is there any way to determine exactly what is mounting a given memory disk (or any filesystem, for that matter) after it has been mounted? Alternately, does anybody have any ideas what might have caused the new /var ramdisk to pop up?

    System Specification:

        # uname -a
        FreeBSD sarge 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0: Thu Nov 22 14:02:13 PST 2012
        donut@sarge:/usr/obj/usr/src/sys/GENERIC i386

        # df
        Filesystem  1K-blocks    Used   Avail Capacity  Mounted on
        /dev/da0s1a    515612  410728   63636    87%    /
        devfs               1       1       0   100%    /dev
        /dev/da0s1d    515612  287616  186748    61%    /var
        /dev/da0s1e   6667808 2292824 3841560    37%    /usr
        /dev/md0        63004      32   57932     0%    /tmp
        /dev/md1         3484       8    3200     0%    /var/run
        /dev/md2        31260       8   28752     0%    /var/log
        /dev/md3        31260     512   28248     2%    /var      <-- This

        # cat /etc/fstab
        # Device     Mountpoint  FStype  Options           Dump  Pass#
        /dev/da0s1a  /           ufs     rw,noatime        1     1
        /dev/da0s1d  /var        ufs     rw,noatime        2     2
        /dev/da0s1e  /usr        ufs     rw,noatime        2     2
        md           /tmp        mfs     rw,-s64M,noatime  0     0
        md           /var/run    mfs     rw,-s4M,noatime   0     0
        md           /var/log    mfs     rw,-s32M,noatime  0     0

    Thank you in advance for any assistance.
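
    A minimal sketch of how one might hunt for the boot script responsible, assuming the stock FreeBSD rc.d layout (the grep patterns are illustrative):

        # anything that creates md(4) disks at boot
        grep -r "mdmfs\|mdconfig" /etc/rc.conf /etc/rc.conf.local /etc/rc.d/ /usr/local/etc/rc.d/ 2>/dev/null
        # rc.d scripts that both reference /var and mount something
        grep -rl "/var" /etc/rc.d/ | xargs grep -l "mdmfs\|mount"

    On 9.x, /etc/rc.d/var can create a memory-backed /var when the real one is not usable early in boot; the controlling rc.conf knobs (varmfs and varsize, with defaults in /etc/defaults/rc.conf) are worth checking first, and a default-sized varsize of 28m would line up with the 28 MB disk seen here.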

  • MacOS creates a new mount on AFP path calls

    - by jAndy
    Hi folks, the scenario is as follows: in my webapp, my customers use Firefox as the target browser. They need to open afp:// folders via JavaScript. To make a long story short, this really works: you set up Firefox via about:config and set the value network.protocol-handler.external.afp to true. The operating system (OS X) then takes over the path and correctly opens a Finder window.

    The problem: OS X creates a new mount every time. It cannot distinguish between afp://host/path/111 and afp://host/path/222, for instance. Furthermore, even if the afp path is 100% identical, a new mount is created. It looks like this is the default behavior of OS X, regardless of Firefox. So, is there any chance I can tell OS X not to create a new mount for some subdirectories which should be accessed over afp:// ?

    Update: It looks like there are OS X applications which can change the default handler for network protocols, so you can change "somewhere" which application OS X should call for a protocol. If that is true, wouldn't it be possible to create a script which just opens the local path without the afp:// prefix? The question here is: where is that configuration to tell OS X which application to use for a specific protocol? Any help welcome!
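
    The scheme-to-application mapping lives in LaunchServices, and the third-party duti tool can edit it from the command line. A hedged sketch, assuming duti is installed (e.g. via MacPorts or Homebrew) and that com.example.AFPOpener is a hypothetical handler app of your own:

        # register the custom app as the handler for the afp:// URL scheme
        duti -s com.example.AFPOpener afp

    The handler app (an AppleScript or Automator wrapper would do) could then map the AFP URL onto the already-mounted local path and open that in Finder, instead of letting Finder mount the share again.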

  • WWNs, WWPNs and Fibre Channel addresses

    - by user238230
    There is lots of contradictory information on these subjects, and I don't know why. My first question is about the 64-bit WWN. One reference claims the terms WWN and WWPN are synonymous. An online source seems to refute this. They say: "A WWPN (world wide port name) is the unique identifier for a Fibre Channel port, whereas a WWN (world wide name) is the unique identifier for the node itself. A good example is a dual-port HBA: there will be two WWPNs (one for each port) and only a single WWN for the card itself."

    Question #1: Which is correct? I'm almost positive I read that every "port" has a WWN.

    My next question is about the 24-bit FC address that is dynamically allocated to a port when it is introduced to the switch. The Domain ID field is defined as "a unique number provided to each switch in the fabric."

    Question #2: Do Domain IDs only apply to switch ports? For example, what would the Domain ID be for an HBA? None? The same as the switch port it is connected to?

    Question #3: My last question is about the Name Server of a switch. A book example shows the routing of a message through the switch. It uses the WWNs of the source and destination ports to route the message. I am assuming that the Name Server must associate the WWN and the FC address in some way in order to route the message, correct?
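
    The node/port split is easy to see on a live system. A small sketch for a Linux host with an FC HBA (sysfs paths assume the fc_host class is present; host0 is illustrative):

        cat /sys/class/fc_host/host0/node_name   # WWNN: the node (card) identifier
        cat /sys/class/fc_host/host0/port_name   # WWPN: one per port

    A dual-port HBA typically shows up as host0 and host1 with distinct port_name values but a shared node_name, which matches the quoted source's distinction.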

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:

    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish it to a single "trusted tester," or make it unlisted.

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the .crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them!

    Option (2), having the school create a Domain App, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension would also require a similar process, so this doesn't seem ideal.

    I doubt that option (3) would work. If I published to the admin as a "trusted tester," I don't think the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity.

    Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
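
    For option (1), one quick thing worth ruling out is the Content-Type the server actually returns for the hosted file. A hedged check (the URL is a placeholder):

        curl -sI https://example.com/schoolA-extension.crx | grep -i '^content-type'
        # Chrome expects: Content-Type: application/x-chrome-extension

    A server that answers with text/plain or application/octet-stream can make a directly linked .crx silently fail to install even when the file itself is fine.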

  • Setting Up My Home Network

    - by Skizz
    I currently have five PCs at home, three running WinXP and two running Ubuntu. They are set up like this:

        ISP ----- Modem ---- Switch ---- Ubuntu1 -- B&W Printer
                               |
                               |--WinXP1
                               |
                               |--WinXP2
                               |
                            Wireless -- Colour Printer
                               |---------Ubuntu2
                               |---------WinXP3 (laptop)

    The Ubuntu1 machine is set up as a PDC using Samba and runs fetchmail, procmail and dovecot to get my e-mail and allow me to access it via IMAP, so I can read the e-mail on any PC. I'd like to set up the network like this:

        ISP ----- Modem ---- Ubuntu1 ---- Switch ------ WinXP1
                               |             |
                          B&W Printer        |--WinXP2
                                             |
                                          Wireless -- Colour Printer
                                             |---------Ubuntu2
                                             |---------WinXP3 (laptop)

    My questions are:

    1. How to configure Ubuntu1 to act as a firewall (a minimal sketch of this follows below).
    2. How to configure Ubuntu1 to provide consistent user authentication across the network. At the moment, Samba provides roaming profiles for the XP machines, but the Ubuntu2 machine has its own user list. I'd like a single authentication scheme for both the XP and Linux machines, so that users added to the server list propagate to all PCs (i.e. new users can log on using any PC without modifying any of the client PCs).
    3. How to configure a Linux client (Ubuntu2 above) to access files on the server (Ubuntu1), some of which are in user-specific folders: effectively sharing /home/{user} per user (read and write access) and things like /home/media/photos with read access for everyone and limited write access.
    4. How to configure the XP machines (if it differs from the Samba method).
    5. How to set up e-mail filtering. I'd like a whitelist/blacklist system for incoming e-mails on some of the accounts (mainly my kids' accounts), with filtered e-mails put into quarantine until a sysadmin either adds the sender to a blacklist or a whitelist.

    OK, that's a lot of stuff. For now, I don't want config files; rather, I want to know what services/applications to use and how they interact. For example, LDAP could be used for authentication, but what else would be useful to make administering the LDAP easier? Once I have a general idea of the overall configuration, I can ask other questions about the specifics. I have looked around for information, but most answers are usually in the form of abstract config files and lists of packages to install. Skizz
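
    A minimal sketch of Ubuntu1 as the NAT gateway/firewall, assuming eth0 faces the modem and eth1 faces the switch (interface names and the simple forwarding policy are illustrative; a production ruleset would be tighter):

        # enable packet forwarding
        sysctl -w net.ipv4.ip_forward=1
        # masquerade LAN traffic out the modem-facing interface
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        # allow LAN -> internet, and only established/related traffic back in
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -j DROP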

  • 2011 i7 Macbook Pro unable to boot from any Windows CD?

    - by Craig Otis
    I'm encountering issues installing Windows alongside my Lion install. I'm attempting to install from the internal SuperDrive, after using Boot Camp to partition what was a single HFS+ volume. When holding down Option at boot, the CD appears in the startup list, but upon selecting it, I get a gray screen for 5 minutes, then a flashing white folder. I tried installing rEFIt and using it to boot the CD, but I receive an error about "Not Found" being returned from "LocateDevicePath", and a mention of the firmware not supporting booting using legacy methods. In the Console, when opening the Startup Disk preference pane (which never presents the CD as a selectable option), I see:

        11/25/11 4:39:31.159 PM System Preferences: isCDROM: 0 isDVDROM:1
        11/25/11 4:39:31.159 PM System Preferences: mountable disk appeared: /Volumes/GRMCPRFRER_EN_DVD
        11/25/11 4:39:33.214 PM System Preferences: - So far so good, passing disk to System Searcher.
        11/25/11 4:39:33.218 PM System Preferences: OSXCheck: No boot.efi in System Folder or volume root.
        11/25/11 4:39:33.220 PM System Preferences: WinCheck: Not a valid windows filesystem: /Volumes/GRMCPRFRER_EN_DVD
        11/25/11 4:39:33.220 PM System Preferences: WinCheck: Not a valid windows filesystem: /Volumes/GRMCPRFRER_EN_DVD

    I'm at a loss here. I've done my research, but it sounds like most rEFIt errors of this nature are caused by installing from a thumb drive or an external drive; I'm using the internal SuperDrive. Also, I've tried this with two different disks:

    - A Windows XP SP2 CD
    - A Windows 7 x86 DVD

    Both are disks I've had around for years, and I've used them reliably in the past. The system is an early 2011 15" MacBook Pro with all firmware updates installed.

  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

        forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which:

    - Doesn't spam the messages to email or the console.
    - Doesn't create yet another krufty logfile which requires cleanup months or years later.
    - Captures the log information somewhere, so it can be viewed later if desired.
    - Works on most Unixes.
    - Fits into an existing log infrastructure, using common syslog conventions like 'facility'.

    Some of these are third-party scripts, and they don't always do logging internally.

    UPDATE 2010-04-30: In the process of writing this question, I think I have answered it myself, so I'll answer "Jeopardy-style". Is there any problem with this method? The following will pipe any cron output to /usr/bin/logger, which sends it to syslog with a 'tag' of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation.

        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk

    /var/log/messages now has one additional message which says this:

        Apr 29, 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully.

    I like /usr/bin/logger because it works well with an existing syslog configuration and infrastructure, and it is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.
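
    For finer routing, logger can also set an explicit syslog facility and priority; a small extension of the crontab line above (the facility/priority pair is an illustrative choice):

        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk -p cron.info

    With that, a syslog.conf rule for cron.info decides where the messages land, keeping them out of the general messages file if desired.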

  • Windows 7 - Intermittently processes will not close when the app closes

    - by Bill Sambrone
    I have a user I am supporting who has the strangest issue. There are two problem applications: Word 2010 and a scanning program called ScandallPro. Intermittently (and at least once a day), she will close an app and the underlying process will not close. Both Word 2010 and the scanning software have all the latest updates. Another user has the same software and identical hardware and does not have this problem. I have formatted and rebuilt the computer for the user who is having the problems. After the rebuild, the machine was fine for a day, but the scanning software continues to intermittently keep its process running even after it is closed. This is a problem because she cannot open a new instance of it while the process is still running. There is a boatload of line-of-business software on this machine, all of which she needs. I believe the Word 2010 issue is due to a misbehaving add-in (there are two add-ins, neither of which seems stable), and I think my best bet is to work with the add-in vendor on it. The scanning program staying open is isolated to this one user. The only difference between her machine and the other user's is that she has QuickBooks, RoboForm, and Adobe Acrobat X Pro. Any ideas of what could be causing this, or other diagnostic steps to try?

  • Pivot table not refreshing sort order

    - by William Anthony
    I have a pivot table that gets its data source from another sheet in the same workbook. I want the sort order of the data to match the data source order, so I chose "Sort in data source order" in the pivot table options. The problem is that when I change the data order on the data source worksheet and then refresh the pivot table, the sort order doesn't change. I googled and found that the pivot table should be unlinked first and then re-linked for this to work properly, so I tried the following. The original data source has the named range origdata; the fake data source has the named range dummydata. I manually changed the data source to dummydata, then changed it back to origdata, and the sort order did change as expected. Now I want to automate the operation, so I'm using this code in the Worksheet.Activate event (PT is the PivotTable instance):

        ...
        PT.SourceData = "dummydata"   ' unlink: point the pivot table at the dummy range
        PT.RefreshTable
        PT.SourceData = "origdata"    ' re-link to the real range
        PT.RefreshTable
        ...

    Changing the data source from VBA didn't change the sort order the way the manual method did. Why is that? Am I missing something? Maybe some routine runs when I change the data source manually via the toolbar button? Thanks in advance for your help.

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files.

    I'll admit I'm not as familiar with Fedora as I'd like; I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, perhaps mounting the web directories in user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this?

    I'm using vsftpd right now to host web directories, which is why I like the home-directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access, to another post. I'm interested right now in just getting FTP and Apache to play nice in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user that the FTP account saves as, there are permissions errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group read/write access to everything, like Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
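
    A minimal sketch of the shared-group idea from the PS (group, user and path names are illustrative; the vsftpd umask line is the piece that makes new uploads group-writable):

        # create a common group and add apache plus each FTP user to it
        groupadd webdev
        usermod -aG webdev apache
        usermod -aG webdev someftpuser
        # group-own the webroot and make new files inherit the group
        chgrp -R webdev /var/www/site1
        chmod -R g+rwX /var/www/site1
        find /var/www/site1 -type d -exec chmod g+s {} \;
        # in /etc/vsftpd/vsftpd.conf:
        #   local_umask=002

    On Fedora, SELinux also needs the files labeled for Apache if they live outside /var/www, e.g. chcon -R -t httpd_sys_content_t /path/to/webroot (a semanage fcontext rule plus restorecon is the more durable form).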

  • Have it fixed or buy a new one?

    - by Workshop Alex
    My dual-monitor system has just become a single-monitor system again: the older monitor decided it would be nice to just turn black. It's a Samsung LCD monitor and is over three years old. I'm not sure if the warranty is still valid, but I wonder which option would be more efficient:

    1. Have the monitor fixed for a small amount.
    2. Buy a new monitor for a slightly bigger amount.

    When monitors were still expensive, I wouldn't have hesitated and would just have had my monitor repaired. But prices are so low nowadays (and repairs are expensive) that I wonder if it's worth the trouble. Of course, I'm in no hurry, since I still have another monitor; it's just that I liked the dual-monitor setup.

    Solved! I just ordered a new monitor: a Samsung SyncMaster T260HD 25.5". It costs much more than having my old one repaired would have, but I noticed that this one has a built-in TV tuner, plus speakers, so it's worth the additional value it provides.

  • Why does my Exchange message filtering rule not work?

    - by Jon Cage
    I have two rules set up to sort incoming bug reports. The first is specific to a single device:

        Apply this rule after the message arrives
        sent to SMS Distribution
        and with <source_device_number>: in the body
        move it to the BugReports\<source_device_number> folder

    ...and the second is a catch-all for everything else:

        Apply this rule after the message arrives
        sent to SMS Distribution
        move it to the BugReports folder

    For some reason, though, the first rule never seems to act, even though it's higher in the list. So an email like the following doesn't get caught by the first rule:

        From: <SourceDeviceUID>
        To: SMS Distributor
        Subject: Message from <SourceDeviceUID>
        Message: <source_device_number>: Device encountered a problem. Details below...

    ...where <source_device_number> is an integer. The second rule works fine. But for some high-priority devices, I want the reports automatically sorted. Why might that first rule fail?

  • iptables to block VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. We needed to add VPN support for a special program, so we installed OpenVPN and registered with a VPN provider. The connection works well, and we have a virtual tun0 interface for tunneling.

    To achieve the goal of tunneling only a single program through the VPN, we start the program with sudo -u username -g groupname command, and we added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets.

    Problem: if the VPN fails, there is a chance that the to-be-tunneled program communicates over the normal eth0 interface.

    Desired solution: marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first.

    I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that the above iptables rules didn't work because the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that marked packets may pass eth0 if they went through tun0 before or are headed to my VPN provider's gateway. Does anyone have an idea for a solution?

    Some config info:

        # iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target   prot opt in  out  source     destination
        1     591K   50M MARK     all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 CONNMARK save

        # iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target   prot opt in  out  source     destination
        1       15  1052 SNAT     all  --  *   tun0 0.0.0.0/0  0.0.0.0/0    mark match 0x2a to:VPN_IP

        # ip rule add from all fwmark 42 lookup 42
        # ip route show table 42
        default via VPN_IP dev tun0
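
    One route-level sketch for fail-closed behavior (table and mark numbers taken from the post; treat it as an idea to test, not a confirmed fix): give table 42 an unreachable fallback route, so that when OpenVPN withdraws its tun0 route the marked traffic has nowhere to go instead of falling through to the main table:

        ip route add unreachable default metric 1000 table 42

    While tun0 is up, its default route in table 42 wins on metric; when the VPN drops, marked packets hit the unreachable route and are refused rather than leaking out over eth0.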

  • Could my local ISP capture my location whenever I launch a VPN connection to a VPN server?

    - by Ozgun Sunal
    I am extremely concerned about what information my ISP can collect once I am connected to a VPN server. For instance, as far as I know, when I start a connection to a HotSpotShield VPN server, an IP address is assigned to me just before a successful connection. Besides, I'll have an extra IP address at the beginning with the TAP adapter. An encryption tunnel is set up between me and the VPN server. Whenever my request for a website reaches them (the VPN server), they decrypt the data, and later they encrypt the reply which returns from the targeted web server. That is how it works. So the ISP cannot see what I am watching, displaying and writing, because the connection is encrypted. The targeted websites, on the other hand, see and record all actions, but they cannot identify my real IP address.

    I'm really concerned about whether the ISP can see "my location". OK, the websites see an IP address from another country instead of my real IP address, but what can my ISP learn from the traffic going through them? Can they find out who I am? Won't they say, "Hey, there is traffic, but who is it and what is he doing right now?", since I get the Internet from them?

  • Netbook (Samsung N220) on Ubuntu 10.04 slows down WiFi for other computers

    - by Joachim
    I encountered a really odd problem with my new netbook. I am running Ubuntu 10.04 on a Samsung N220 Mito. So far everything has worked fine. Now I tried the machine for the first time in our work group, where we have a wifi (with internet access) for all laptops. The wifi is controlled by a computer running SUSE 9.3, which provides a DHCP server and imposes a firewall. At the moment there is only a MacBook on the wifi, and it encounters no problems with the internet or wifi connection. Now to my actual problem: in addition to the MacBook, I connect the Samsung N220 to the wifi.

    Problem 1: My download speed is, for some reason, limited to 70 KB/s max. This is neither a limitation of the server/website I browse, nor a configuration issue on the netbook: at home I get 500 KB/s download speeds. Furthermore, it is not a default limitation for "untrusted" or "new" machines on the wifi, since other new laptops get full-speed internet on it.

    Problem 2: Once the Samsung N220 generates traffic on the wifi, the wifi slows down dramatically for all other machines. I run a ping to the router from the MacBook: with the N220 idling, the ping times are 2-6 ms. When I start downloading or browsing the web with the N220, the MacBook's ping time jumps to 800 ms. Vice versa, when the MacBook generates the traffic, the N220's ping to the router stays constant at around 2-6 ms. So clearly, it is some problem originating from my netbook or its treatment on the wifi. Thanks for any help.

  • Apache httpd processes and PHP out of memory

    - by Ofri
    I have a VPS running Apache, PHP and MySQL on CentOS, with a single Drupal website installed. The VPS has 256 MB of RAM (which could be the root cause of all my problems; maybe I just need more). Whenever I try to open my website from multiple browser tabs (about 8, not 800) all at once, Apache crashes! I have this in the log:

        [Wed Oct 24 11:26:31 2012] [error] [client xxx] PHP Fatal error: Out of memory (allocated 28049408) (tried to allocate 201335 bytes) in xxx on line 2139, referer: xxx

    I have read many, many posts here, but I think there is something fundamental that I'm missing. If I understand correctly, some PHP script tried to allocate another 200 KB after having allocated 28 MB, and failed to do so. First question: should this cause Apache to crash?

    Next, I looked at 'top' while running my little test. Indeed, I see 7 httpd processes, each reserving about 30 MB, which explains why my RAM runs out. How do I prevent Apache from creating new processes until it's out of memory? I tried configuring /etc/httpd/conf/httpd.conf like this:

        <IfModule prefork.c>
        StartServers          1
        MinSpareServers       1
        MaxSpareServers       1
        ServerLimit           1
        MaxClients            1
        MaxRequestsPerChild 100
        </IfModule>

    But I got the same exact result! What am I missing? Thanks a lot!
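
    Two knobs usually interact here; a hedged sketch of checking and sizing them on a 256 MB box (the numbers are illustrative starting points, not tuned values):

        # cap each PHP request so one script cannot exhaust the box
        #   in php.ini:            memory_limit = 32M
        # cap concurrent prefork children so N x ~30 MB fits in RAM
        #   in httpd.conf prefork: MaxClients 5
        httpd -V | grep -i mpm   # the prefork block only applies if prefork is the active MPM
        service httpd restart
        # watch the total resident memory of all httpd workers settle
        watch -n1 'ps -o rss= -C httpd | awk "{s+=\$1} END {print s/1024 \" MB\"}"'

    Note that a PHP "Out of memory" fatal normally ends that one request with an error page; if whole httpd processes are dying instead, dmesg is where the kernel OOM killer would confirm it.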

  • Ubuntu 12.04/12.10 can't detect Windows or any other partitions (ASUS Z77 UEFI BIOS)

    - by user971155
    I've recently finished building my new PC (an ASUS Z77 motherboard with UEFI BIOS), and unfortunately not everything works quite well. After installing Windows 7 Ultimate on a single primary partition (SATA drive), I decided to allocate one more logical partition for additional needs. When I tried doing it with the partition manager, it said that it couldn't allocate the requested size, even though I certainly asked for much less than was available. I thought that it might be a Windows issue and proceeded to installing Ubuntu 12.10 x64. When the graphical installer loaded, it showed me a message stating that it can't find any other operating system on the drive. When I used the custom partitioning option, it showed none of my current partitions (including the one with Windows). However, when I boot with the "Try Ubuntu" feature, it does find them! I find that weird. Here's what the console presents me with:

        ubuntu@ubuntu:~$ sudo os-prober
        /dev/sda1:Windows 7 (loader):Windows:chain
        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00072b98

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848   100020223    49906688    7  HPFS/NTFS/exFAT
        /dev/sda3       100022270  1250263039   575120385    5  Extended
        /dev/sda4       566669312  1250263039   341796864   83  Linux

    I also tried creating partitions from Disk Utility, which results in an error:

        Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/sda, start=51211402240, size=1923000000, type=0x83
        Entering MS-DOS parser (offset=0, size=640135028736)
        MSDOS_MAGIC found
        looking at part 0 (offset 1048576, size 104857600, type 0x07)
        new part entry
        looking at part 1 (offset 105906176, size 51104448512, type 0x07)
        new part entry
        looking at part 2 (offset 51211402240, size 588923274240, type 0x05)
        Entering MS-DOS extended parser (offset=51211402240, size=588923274240)
        readfrom = 51211402240
        MSDOS_MAGIC found
        Exiting MS-DOS extended parser
        looking at part 3 (offset 290134687744, size 349999988736, type 0x83)
        new part entry
        Exiting MS-DOS parser
        MSDOS partition table detected
        containing partition table scheme = 1
        got it
        Error: Can't have overlapping partitions.
        ped_disk_new() failed

    Here's what I get when I try to install the system: i.stack.imgur.com/pjlb9.png, i.stack.imgur.com/g1lXN.png

    P.S. It's strange that I can't even create any more partitions, neither with Disk Utility nor with Windows 7's native tools.
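
    The fdisk listing itself hints at the "overlapping partitions" complaint: the primary /dev/sda4 (sectors 566669312-1250263039) lies entirely inside the extended container /dev/sda3 (sectors 100022270-1250263039), a layout libparted refuses to work with. A couple of read-only checks to confirm the picture before repairing anything (a sketch; run from the live session):

        sudo parted /dev/sda unit s print    # parted reports the overlap explicitly
        sudo sfdisk -l /dev/sda              # a second opinion on the same table
        sudo sfdisk -d /dev/sda > sda-table.backup   # save the table before any repair

    A repair would mean rewriting the table so that sda4 either becomes a logical partition inside sda3 or sda3 shrinks to end before sda4 begins.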

  • Daisy-chainable DisplayPort Monitors

    - by thepurplepixel
    I am the proud new owner of a MacBook Pro with a Mini DisplayPort. My desk setup used to allow me to position the screen of my old MacBook beside an external monitor, essentially allowing dual-head. It was also advantageous that my old MBP and my external monitor had the same resolution, 1440x900. Now I'm searching for a set of two monitors that I can use with a HengeDock. Unfortunately, the MacBook Pro suffers from having only one Mini DisplayPort. Looking at the spec for DisplayPort 1.2 (which the MBP supports), DisplayPort daisy-chaining is supported. What I'm looking for is a monitor that has two DisplayPorts, so I can daisy-chain two monitors off the single Mini DisplayPort. What I don't want is a USB-based video solution: I need full acceleration on both monitors, and an external video card won't cut it. I hope I don't have to wait a few years for these monitors to come out.

    TL;DR: I need two monitors with two DisplayPorts each that I can daisy-chain. Thanks!

  • How do I calculate the cost of printing a given page?

    - by Alenanno
    I have seen questions like "How much does a square inch of ink cost" and "How much more will a high-dpi image cost to print?", but mine asks neither about a specific case, nor about how much something costs, since that would depend on the toner, for example. Rather, I was wondering how I should go about calculating the cost of printing a given page. Note that "given page" should be seen as a sort of x; i.e., the answer should be applicable in any case. I'd like this question to provide a good reference for those who want to calculate this cost.

    What should be taken into consideration? The cost of a single page (the paper only) is easily checkable, since you divide the cost of the whole package by the number of pages in the package. But how do I calculate the cost of the ink/toner? Which translates to: how do I calculate the ink density(1) for a given printer? I know it depends on the quality of the printer itself, the type, the quality of the image being printed, the very nature of what I'm going to print, etc. But again, the focus of my question is not on the variables of this case, but rather the constants, hoping the math simile works for this case too.

    (1) Total amount of ink in one area of the page.
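
    A back-of-the-envelope sketch of the usual decomposition (all prices are illustrative assumptions; cartridge yields are typically quoted at roughly 5% page coverage on standardized test pages):

        # cost/page = paper + ink, with the ink term scaled by actual coverage
        # paper: $25 case of 5000 sheets; toner: $30 cartridge rated for 1500
        # pages at 5% coverage; page printed at ~15% coverage
        echo 'scale=4; 25/5000 + (30/1500) * (15/5)' | bc
        # -> .0650  (about 6.5 cents per page)

    The coverage term is the hard part the question points at; for a real printer, printing a counted batch and dividing cartridge cost by pages-until-empty is the practical way to calibrate it.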

  • CPU Utilization LAMP stack

    - by Max
    We've got an EC2 m2.4xlarge running Magento (CentOS 5.6, httpd 2.2, PHP 5.2.17 with eAccelerator 0.9.5.3, MySQL 5.1.52). Right now we're getting a large traffic spike, and our top looks like this:

        top - 09:41:29 up 31 days, 1:12, 1 user, load average: 120.01, 129.03, 113.23
        Tasks: 1190 total, 18 running, 1172 sleeping, 0 stopped, 0 zombie
        Cpu(s): 97.3%us, 1.8%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
        Mem: 71687720k total, 36898928k used, 34788792k free, 49692k buffers
        Swap: 880737784k total, 0k used, 880737784k free, 1586524k cached

          PID USER   PR NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
         2433 mysql  15  0 23.6g 4.5g 7112 S 564.7  6.6 33607:34 mysqld
        24046 apache 16  0  411m  65m  28m S  26.4  0.1  0:09.05 httpd
        24360 apache 15  0  410m  60m  25m S  26.4  0.1  0:03.65 httpd
        24993 apache 16  0  410m  57m  21m S  26.1  0.1  0:01.41 httpd
        24838 apache 16  0  428m  74m  20m S  24.8  0.1  0:02.37 httpd
        24359 apache 16  0  411m  62m  26m R  22.3  0.1  0:08.12 httpd
        23850 apache 15  0  411m  64m  27m S  16.8  0.1  0:14.54 httpd
        25229 apache 16  0  404m  46m  17m R  10.2  0.1  0:00.71 httpd
        14594 apache 15  0  404m  63m  34m S   8.4  0.1  1:10.26 httpd
        24955 apache 16  0  404m  50m  21m R   8.4  0.1  0:01.66 httpd
        24313 apache 16  0  399m  46m  22m R   8.1  0.1  0:02.30 httpd
        25119 apache 16  0  411m  59m  23m S   6.8  0.1  0:01.45 httpd

    Questions:

    1. Would giving mysqld more memory help it cache queries and react faster? If so, how?
    2. Other than splitting MySQL and PHP onto separate servers (which we're about to do), is there anything else we could/should be doing?

    Thanks!

    UPDATE: Here's our my.cnf along with the output of mysqltuner. It looks like a cache problem. Thanks again!

        # cat /etc/my.cnf
        [client]
        port = ****
        socket = /var/lib/mysql/mysql.sock

        [mysqld]
        datadir=/mnt/persistent/mysql
        port=****
        socket=/var/lib/mysql/mysql.sock
        key_buffer = 512M
        max_allowed_packet = 64M
        table_cache = 1024
        sort_buffer_size = 8M
        read_buffer_size = 4M
        read_rnd_buffer_size = 2M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 128M
        tmp_table_size = 128M
        join_buffer_size = 1M
        query_cache_limit = 2M
        query_cache_size= 64M
        query_cache_type = 1
        max_connections = 1000
        thread_stack = 128K
        thread_concurrency = 48
        log-bin=mysql-bin
        server-id = 1
        wait_timeout = 300
        innodb_data_home_dir = /mnt/persistent/mysql/
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_buffer_pool_size = 20G
        innodb_additional_mem_pool_size = 20M
        innodb_log_file_size = 64M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_lock_wait_timeout = 50
        innodb_thread_concurrency = 48
        ft_min_word_len=3

        [myisamchk]
        ft_min_word_len=3
        key_buffer = 128M
        sort_buffer_size = 128M
        read_buffer = 2M
        write_buffer = 2M

        # ./mysqltuner.pl
        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.52-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 2G (Tables: 26)
        [--] Data in InnoDB tables: 749M (Tables: 250)
        [!!] Total fragmented tables: 262

        -------- Security Recommendations -------------------------------------------

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 31d 2h 30m 38s (680M q [253.371 qps], 2M conn, TX: 4825B, RX: 236B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 20.6G global + 15.1M per thread (1000 max threads)
        [OK] Maximum possible memory usage: 35.4G (51% of installed RAM)
        [OK] Slow queries: 0% (35K/680M)
        [OK] Highest usage of available connections: 53% (537/1000)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/457.2M
        [OK] Key buffer hit rate: 100.0% (9B cached / 264K reads)
        [OK] Query cache efficiency: 42.3% (260M cached / 615M selects)
        [!!] Query cache prunes per day: 4384652
        [OK] Sorts requiring temporary tables: 0% (1K temp sorts / 38M sorts)
        [!!] Joins performed without indexes: 100404
        [OK] Temporary tables created on disk: 17% (7M on disk / 45M total)
        [OK] Thread cache hit rate: 99% (537 created / 2M connections)
        [!!] Table cache hit rate: 0% (1K open / 946K opened)
        [OK] Open file limit used: 9% (453/5K)
        [OK] Table locks acquired immediately: 99% (758M immediate / 758M locks)
        [OK] InnoDB data size / buffer pool: 749.3M/20.0G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 64M)
            join_buffer_size (> 1.0M, or always use indexes with joins)
            table_cache (> 1024)
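
    A sketch of acting on the two loudest tuner findings (the values are starting points to test, not tuned answers; both can be set live and then persisted in my.cnf):

        # query cache is pruned ~4.4M times/day at 64M; try doubling it
        mysql -e "SET GLOBAL query_cache_size = 128*1024*1024;"
        # table cache hit rate is 0%; raise it as the tuner suggests
        mysql -e "SET GLOBAL table_open_cache = 2048;"

    One caveat worth knowing: the query cache takes a global mutex on invalidation, so on a write-heavy Magento box a bigger cache can trade prunes for contention; comparing SHOW STATUS LIKE 'Qcache%' before and after is the honest test.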

  • My Computer hangs for a few minutes just after startup, and then is fine.

    - by EvilChookie
    So I just built myself a reasonably beefy computer, and I installed Windows 7 on it. However, when I start the machine each morning, within a few minutes the computer will semi-hang. That is, the mouse is responsive, and most of the time I can open Task Manager or a new tab in Chrome, but occasionally windows will be labelled as 'Not responding'. Then the machine will get over its problem and will be nice and quick until I turn it off.

    Here are my specs:

    - CPU: AMD Phenom II X4 955 Black (quad core, 3.2 GHz)
    - RAM: 4 GB of DDR3 1300
    - MOBO: ASUS M4A785T-M (latest BIOS)
    - HARD DRIVES: 2x 1 TB Western Digital Caviar Blacks in RAID-0
    - OS: Windows 7 Ultimate x64
    - GPU: ASUS GT240 1 GB

    I believe this issue relates to the RAID array, as I didn't have the hang problem before I created the array. I purchased a second drive and reformatted after creating the RAID array, since the single drive was a little on the pokey side (compared to the rest of the computer).

    What I have tried:

    - Updated RAID drivers
    - Malware checks
    - Windows updates
    - Disabled unnecessary services
    - Checked that CPU and disk activity appear to be low (via Resource Monitor)
    - Confirmed there are no strange errors in the error log

    Any thoughts?

  • Why does Exchange 2003 silently reject emails with large attachments?

    - by Cypher
    Our environment: Exchange Server 2003 Standard, single instance, running on Windows Server 2003 Standard, configured to not send/receive mail with attachments larger than 10 MB. NDRs are not enabled.

    The issue: When an external sender sends an email with an attachment larger than 10 MB, Exchange, as configured, does not receive the message. However, the sender of that message receives no notification from his own mail server that the message could not be delivered due to attachment size. Yet if an external user tries to send an email to a non-existent user, they do receive a message from their mail server indicating that the user does not exist. Why is that, and is there anything I can do about it? It would be nice if the sender received notification that the attachment size exceeds our limits and that their message was never received.

    Update: The Exchange server has a SpamAssassin box in front of it; could that have something to do with it? Here is one of the last lines from SpamAssassin's logs when searching for my test e-mails:

        mail postfix/smtp[19133]: 2B80917758: to=, relay=10.0.0.8[10.0.0.8]:25, delay=4.3, delays=2.6/0/0/1.7, dsn=2.6.0, status=sent (250 2.6.0 Queued mail for delivery)

    My assumption is that SpamAssassin thinks the message is OK and is forwarding it on to Exchange.

    Update: I've verified that Exchange is receiving the message and generating an NDR. However, delivery of NDRs is disabled to prevent backscatter. Is there something I can do to get Exchange to send a bounce message to the sending mail server (or verify that such a message is being sent), so the sending mail server can notify its sender of the bounce?
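
    One way to keep NDRs disabled and still get senders notified is to reject oversized mail during the SMTP conversation on the Postfix/SpamAssassin relay, so the sender's own server generates the bounce. A hedged sketch (10 MB expressed in bytes; run on the relay box):

        postconf -e message_size_limit=10485760
        postfix reload

    Rejection at SMTP time produces no backscatter, because no message is ever accepted and then bounced; the sending server sees the size error directly and notifies its own user.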

  • Automatically organizing Windows files and folders

    - by Kiquenet
    For Windows only: organizing the eleventy-billion files you've got stuffed into folders on your hard drive is very hard. For example, I have one folder on my computer that I save all web downloads to, regardless of file type, size or purpose. Many of the files are only temporary downloads, for instance setup files of applications that I test, demonstration videos that I watch once, or documents that I want to read. Some files, on the other hand, are there to stay, and I used to move them out of the download folder manually. Other files fill folders all over my computer: lots of source code, tests, programs, tools, and so on. I need technology to organize these billions of files. Which are the best tools for automatically organizing and sorting files and folders?

    - Digital Janitor: http://davidevitelaru.com/software/digital-janitor/
    - Belvedere: http://lifehacker.com/5510961/how-to-automatically-clean-and-organize-your-desktop-downloads-and-other-folders
    - Download Mover: http://www.neoteo.com/download-mover-reorganiza-tus-descargas-14188
    - File/Folder Date Organizer: http://seedling.dcmembers.com/other/ffdorg.zip
    - DropIt: http://www.lupopensuite.com/db/dropit.htm

    Other questions about organizing files, desktops, etc.:

    - How to automate the process of organizing audio files on Windows
    - Organizing My Windows Desktop
    - What's a good way for organizing PDF documents on Windows?
    - Folksonomy tagging for files
    - What is your method of "folksonomy" tagging for files on your local machine?

  • Hardware recommendations / parts list for a modern, quiet ZFS NAS box - 2011-Feb edition [closed]

    - by dandv
    I want to build some really reliable storage for my data, and it seems that ZFS is the only filesystem at the moment that does live checksumming. That rules out DroboPro, so I'm looking at building a quiet ZFS NAS that would start with four 2 TB or larger hard drives. I'd like this system to be very reliable and relatively future-proof for 2-3 years, so I'm willing to invest some $$$ and buy higher-end components. I did see questions here and on other forums about low-cost servers, but I'm not looking for those. I'd be super happy to go for an off-the-shelf solution, but I haven't found one that's quiet. I started doing the research (summarized on my wiki), but I realized that it just gets too complicated for what I know as a software dude, and I'm entering analysis-paralysis territory. At this point, I'm basically looking for a parts list for a configuration that will work (and is modern), and I know there are folks around here who are way more competent than me. I've built computers and am comfortable assembling one and messing with *nix; I can follow guides; I just want to end the decision process for the hardware and software configuration.

    What I've researched so far (not that these are very modern components):

    - Case: I think I've settled on the Antec Twelve Hundred, because it cools well, is quiet, and simply has 12 bays that allow flexible mounting. The SilverStone Raven is its counter-candidate, but I find its construction quite odd.
    - PSU: I'm torn between the Antec CP-850 and the Nexus RX-8500, but I did this research more than a year ago. The Nexus has a very uniform power profile, and I'd rather not have the Antec spin its fan up and down based on load. On the other hand, I'm not sure how often my file server would draw more than 400 W under use.
    - Hard drives: I've read that WD Black drives are actually WD RE3s with a software setting changed. I'd also like to buy different drive types, not just four WDs. Recommendations? Right now I have a 2 TB Hitachi Deskstar 7K3000.
    - Motherboard, CPU and RAM: I have no idea, other than that the RAM must be ECC. I already asked a question here about ECC RAM, but I was misguided and was looking for a motherboard that would also support USB 3.0. I've learned to go with eSATA, or to worry about USB later.
    - Then there's the (liquid) cooling, the Wi-Fi card, and FreeBSD vs. OpenSolaris Express.

    Lastly, I'm wondering if I can make this PC into a media server by adding a Blu-ray drive and a good sound card. But support for Blu-ray is spotty on Linux, and I don't know if Windows 7 in VirtualBox would get sufficient hardware access to output HDMI or S/PDIF signals. (Running OpenSolaris virtualized is not an option because of the reliability risk.) Then there are HDCP concerns. Suggestions on that would be appreciated as well, but I don't want us to get sidetracked. A specific shopping list for the core components would be great, so I can start ordering, and in the meantime I'll educate myself about the other issues. Finally, I think this could become a great FAQ for those technically inclined to build their own ZFS server but confused by the dizzying array of options out there, and I promise to compile the results and share my experience building and benchmarking the server.
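
    Once the hardware lands, the ZFS side of the four-drive plan is the short part. A minimal sketch (device names are FreeBSD-style placeholders; raidz1 gives single-drive redundancy on top of the live checksums):

        zpool create tank raidz1 ada0 ada1 ada2 ada3
        zfs set compression=on tank
        zpool scrub tank     # schedule periodically: a scrub verifies every checksum

    The periodic scrub is what turns the checksumming into an actual early-warning system: it reads and verifies every block, repairing from parity when a disk silently returns bad data.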

  • Can many addon domains slow down cPanel or create problems?

    - by Marco Demaio
    I actually resell hosting plans by using one single cPanel account with many addon domains under it. Basically, for each new user I don't create a new cPanel account; I simply create a new addon domain and give him the necessary space. I know that this way the final user won't be able to manage his email and won't be able to access all the cPanel features, but that's OK. My only question is: is there a limit on the number of addon domains that can be added to a cPanel account? I know my account allows unlimited addon domains, but is there anything that might slow down cPanel or create problems? I mean, is there any advice you could give me about this way of using cPanel, which might not be the usual way of using it? (For instance: "Be aware that Awstats could run very slowly or crash", or "Be aware that mail for different addon domains under the same cPanel account could create many problems", etc.) Many thanks!
