Search Results

Search found 27973 results on 1119 pages for 'power point vba'.


  • Apache fails to start after WHM easyapache update

    - by Vigrond
    Trying to get some light shed on this issue. Running CentOS, I upgraded Apache to 2.2 using easyapache; all was well. I then used WHM to update MySQL to 5.5. This succeeded, but now Apache will not start. The error log reports entries like:

      [Sun Apr 15 00:44:57 2012] [alert] getpwuid: couldn't determine user name from uid 4294967295, you probably need to modify the User directive
      [Sun Apr 15 02:27:30 2012] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache/bin/suexec)
      [Sun Apr 15 02:27:30 2012] [warn] pid file /usr/local/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
      [Sun Apr 15 02:27:30 2012] [alert] getpwuid: couldn't determine user name from uid 4294967295, you probably need to modify the User directive
      (the getpwuid alert repeats several times)
      [Sun Apr 15 02:27:30 2012] [notice] Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 configured -- resuming normal operations
      [Sun Apr 15 02:27:30 2012] [alert] Child 4063 returned a Fatal error... Apache is exiting!

    So I tried to recompile using easyapache again, but easyapache just hangs. I tried with base PHP settings, and it always gets stuck on "bf804000-bf819000 rw-p 7ffffffe9000 00:00 0 [stack]". At this point the cPanel status says "create srm.conf and access.conf for mod_frontpage". I have tried things like rpm --rebuilddb, yum clean all, and yum update, with no luck. I'm kind of running out of ideas, and wondering if anyone could point me in the right direction.
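
    A uid of 4294967295 is -1 cast to unsigned 32 bits: httpd could not resolve the user named in its User directive into a uid at all, which points at a broken passwd lookup rather than at Apache itself. A minimal diagnostic sketch, assuming the stock cPanel Apache layout ('nobody' is the usual default user there):

      grep -Ei '^(user|group)' /usr/local/apache/conf/httpd.conf   # which user httpd is told to run as
      id nobody                                                    # does that user still resolve to a uid?
      getent passwd nobody                                         # do name-service lookups work at all?

    If getent fails here, the MySQL upgrade is a plausible culprit (a stale nscd cache, or a name-service library left half-upgraded), and that is worth ruling out before recompiling Apache yet again.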

  • Ubuntu Wubi "drive" failure; mount drive in XP?

    - by 618034
    Hi there, I installed the Wubi distribution of Ubuntu on a separate partition (which is silly, since why do I care if Windows can still manage the partition?) a few months back; it was pretty awesome, until Linux hosed itself. I can get Ubuntu to boot if I try really hard through grub, but once it does start, the screen is hosed, so no dice. At this point I'd like to wipe it all and start over, but I need to get some stuff off the "disk" first. The Wubi install makes this difficult, since the "disk" is a flat file on an NTFS partition. I've done just about everything I can think of: I renamed the virtual disk .iso, mounted it with VirtualCloneDrive, then used whatever magic EXT3 (EXT4?) readers I could dig up on the Internet to parse the mount, but nothing's working. Can you offer any suggestions? The "disk" is currently at D:\ubuntu\disks\root.iso. Many thanks! (I may be high-latency at the moment; apologies if I don't address follow-ups quickly.)
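
    One angle that usually works for Wubi recovery: boot any Linux live CD/USB and loop-mount the virtual disk directly, since the ext3/ext4 drivers there are first-class. A minimal sketch, assuming the NTFS volume shows up as /dev/sda2 (Wubi normally names the image root.disk, so adjust for the renamed .iso):

      sudo mkdir -p /mnt/win /mnt/wubi
      sudo mount -o ro /dev/sda2 /mnt/win                              # the NTFS partition holding D:\ubuntu
      sudo mount -o loop,ro /mnt/win/ubuntu/disks/root.iso /mnt/wubi   # the ext3/ext4 image inside it
      ls /mnt/wubi/home                                                # user data should be under here

    Copy what you need out of /mnt/wubi to an external drive before wiping anything.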

  • Google Apps Sync bloated PST file to 14GB

    - by James S
    Back story: I have Outlook connected to my Google Apps email and noticed that some mail never got migrated from my original PST file. I found some VBA code online that compares mail in different PST folders, modified it to find missing messages and copy those to the target folder, and ran it folder by folder to move the missing mail. Before the exercise the Google Apps PST was about 4 GB; after it was about 4.7 GB. Problem: I left Outlook open so Google Apps Sync could copy the new mail online. 24 hours later the Google Apps PST file has bloated to 14 GB+ and none of the mail has been synced to the cloud. I know there should be at most ~5 GB of mail, so what is the rest of the space being taken up by? Funny thing is, Gmail shows 3 GB as being used online. What I tried:
    - I emptied the Deleted Items folder and the recycle bin.
    - I ran Outlook's compact-PST operation; it didn't help.
    - I ran SCANPST.EXE on the PST; it didn't help.
    - I re-ran compaction after SCANPST found and fixed a few errors; still no change.
    Any ideas out there on what caused the problem and how to solve it?
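
    For reference, the comparison macro was along these lines; a minimal sketch only (folder names are hypothetical, and matching on subject is a crude stand-in for a real message identifier):

      ' Run from the Outlook VBA editor with both stores open in the profile.
      Sub CopyMissingMail()
          Dim src As Outlook.Folder, dst As Outlook.Folder
          Dim itm As Object, found As Object, cpy As Object
          Set src = Application.Session.Folders("Old PST").Folders("Inbox")
          Set dst = Application.Session.Folders("Google Apps").Folders("Inbox")
          For Each itm In src.Items
              Set found = dst.Items.Find("[Subject] = " & Chr(34) & itm.Subject & Chr(34))
              If found Is Nothing Then
                  Set cpy = itm.Copy   ' Copy creates a duplicate in src...
                  cpy.Move dst         ' ...which Move then relocates to the target
              End If
          Next
      End Sub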

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by ThatGraemeGuy
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each; lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on?

    Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to Storage VMotion everything off that datastore and then format it.

    Update 2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 or host3. I can use the Add Storage wizard on host2 and it will see the LUN; at this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with the error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
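
    The "(head)" label and the "snap-" prefix are the hosts deciding the volume is a snapshot/replica of an existing VMFS, which usually happens when the LUN is presented with different IDs to different hosts. The same resolve step the wizard attempts can be driven from the ESX(i) console; a sketch, assuming the datastore label is "datastore1":

      esxcfg-volume -l               # list VMFS copies detected as snapshots
      esxcfg-volume -m "datastore1"  # mount it keeping the existing signature
      esxcfg-volume -r "datastore1"  # or resignature it instead

    Confirming on the array side that every host sees the LUN under the same LUN number is worth doing before resignaturing anything.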

  • PHP-FPM issue on LEMP Stack and WordPress

    - by jw60660
    I'm very much an NGINX and server admin beginner. I used this tutorial to install NGINX / PHP / MySQL / WordPress: C3M Digital Tutorial. In this tutorial the backend php-cgi setup is configured using fastcgi, and php5-fpm was installed along the way:

      apt-get install nginx-full php5-fpm php5 php5-mysql php5-apc php5-mysql php5-xsl php5-xmlrpc php5-sqlite php5-snmp php5-curl

    After reading that the NGINX configuration in the WordPress Codex was more secure than most tutorials, I decided to use the Codex configuration: WordPress NGINX configuration in Codex. The Codex configuration uses php-fpm for the backend php-cgi. When opening the browser I got a 502 Bad Gateway error. The error log was:

      2012/06/10 21:18:27 [crit] 14009#0: *4 connect() to unix:/tmp/php-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 12.3.456.789, server: mywebsite.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "mywebsite.com"

    In the main NGINX configuration file supplied by the Codex I noticed the "server unix:" line in the upstream php block, which points at a socket that does not exist:

      # Upstream to abstract backend connection(s) for PHP.
      upstream php {
          server unix:/tmp/php-fpm.sock;
          # server 127.0.0.1:9000;
      }

    I checked the /tmp folder and it was empty. It seems I missed configuring php-fpm to play with NGINX. Can someone point me in the right direction? Much appreciated!
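
    nginx only connects to that socket; it is php-fpm that has to create it, via the listen directive in its pool config. A minimal sketch of the two pieces that must agree (paths are the Debian/Ubuntu defaults and may differ on your install):

      ; /etc/php5/fpm/pool.d/www.conf
      ; listen = 127.0.0.1:9000
      listen = /tmp/php-fpm.sock
      listen.owner = www-data
      listen.group = www-data

      # then restart php-fpm and confirm the socket now exists
      service php5-fpm restart
      ls -l /tmp/php-fpm.sock

    Alternatively, leave php-fpm on 127.0.0.1:9000 and switch the upstream block to the commented-out TCP line; either way, both sides have to name the same endpoint.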

  • Understanding NFS4 exports and pseudofilesystem

    - by Trevor Harrison
    I think I understand the way pre-NFSv4 exports work, specifically the namespace of the exported point (i.e. export /mnt/blah on the server, use mount server:/mnt/blah /my/mnt/point on the client). However, I'm having a hard time wrapping my head around NFSv4 exports. What I've been able to gather so far is that you export a 'root' by marking it with fsid=0, which you then import on the client side by referring to it as '/' (i.e. exportfs -o fsid=0 /mnt/blah on the server, mount server:/ on the client). However, after that, it gets a little weird. From my playing around, it seems I can't export anything else that's not under /mnt/blah. For example, exportfs /home/user1 fails when trying to mount from the client unless /mnt/blah/home/user1 exists on the server. If this is the case, what is the difference between exportfs /mnt/blah/subdir1 on the server plus mount server:/subdir1 on the client, and just skipping the exportfs and mounting whatever subdir of /mnt/blah you want? Why would you need to export anything other than the root? It's all in the same namespace anyway.
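
    For what it's worth, the usual NFSv4 pattern is to graft extra trees into the pseudo-root with bind mounts, and then export each grafted subtree to control access to it; that is why exporting /home/user1 only works once it appears under /mnt/blah. A minimal sketch, assuming /mnt/blah is the fsid=0 root and a client named client1:

      # /etc/fstab on the server -- graft /home/user1 into the pseudo-root
      /home/user1  /mnt/blah/home/user1  none  bind  0 0

      # /etc/exports
      /mnt/blah             client1(rw,fsid=0,no_subtree_check)
      /mnt/blah/home/user1  client1(rw,nohide,no_subtree_check)

      # on the client
      mount -t nfs4 server:/home/user1 /my/mnt/point

    Without the second export line the directory is visible in the namespace but the server refuses access to its contents, which is the point of exporting more than just the root.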

  • Domain Controller DNS Best Practice/Practical Considerations for Domain Controllers in Child Domains

    - by joeqwerty
    I'm setting up several child domains in an existing Active Directory forest and I'm looking for some conventional wisdom/best practice guidance for configuring both the DNS client settings on the child domain controllers and the DNS zone replication scope. Assuming a single domain controller in each domain, and assuming that each DC is also the DNS server for its domain (for simplicity's sake): should the child domain controller point to itself for DNS only, or should it point to some combination (primary vs. secondary) of itself and the DNS server in the parent or root domain? If a parent/child/grandchild domain hierarchy exists (with a contiguous DNS namespace), how should DNS be configured on the grandchild DC? Regarding the DNS zone replication scope: if each domain's DNS zone is stored on all DNS servers in that domain, then I'm assuming a DNS delegation from the parent to the child needs to exist, and that a forwarder from the child to the parent needs to exist. With a parent/child/grandchild domain hierarchy, does each child forward to the direct parent for the direct parent's zone or to the root zone? Does the delegation occur at the direct parent zone or from the root zone? If all DNS zones are stored on all DNS servers in the forest, does that make the above questions about replication scope moot? Does the replication scope have some bearing on the DNS client settings on each DC?

  • Wireless to Wireless Transfer Slow on a Linksys WRT54GL

    - by Kyle Brandt
    The situation: when I try to transfer a file from one computer to another, both connected via wireless to a WRT54GL (in an office) running dd-wrt firmware, I often get bad speeds, generally averaging around 100 kilobytes a second. Either computer can download via wireless from the Internet at about 2 megabytes a second. The speed is slow with the transfer of one large file. There are about 20 other wireless networks that the computers can see, so there is a lot of noise, but I don't have the equipment to really monitor the frequencies well. Even so, that still seems pretty slow. I thought maybe it was the transmit power on each card, but even when they are 5 feet apart with a line of sight I still get these speeds. According to Linux, both cards are operating at 54g. My questions: Is this normal for this sort of consumer-level wireless equipment? Is there anything I can do to improve it? Why is wireless-to-wireless transfer slow when everything else isn't? What steps might I take to figure out what is happening -- for example, are lots of packets not making it to the access point, requiring retransmissions? Above all, I want to find out what the problem actually is. This may seem odd, but at this point I am more interested in understanding the problem than in fixing it. What I have tried: I have messed with lots of settings -- different channels, xmit power, G-only -- none of which has made anything better. I've also tried upgrading to a newer dd-wrt firmware version and doing a reset to wipe out the settings.
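
    One structural factor worth keeping in mind: in infrastructure mode every wireless-to-wireless frame crosses the air twice (client to AP, then AP to client), so the theoretical best case is roughly half of what a single hop manages, before any contention from the 20 neighbouring networks. Beyond that, a few client-side counters help narrow things down; a sketch assuming Linux clients whose interface is wlan0:

      iwconfig wlan0             # bit rate, link quality, and the "invalid misc"/retry counters
      cat /proc/net/wireless     # the same counters in tabular form, handy for watching over time
      ping -s 1400 192.168.1.1   # loss/latency to the AP itself with near-MTU packets

    Retry/invalid counters that climb during a transfer point at airtime contention or interference; a clean link with slow transfers points back at the router's forwarding.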

  • Feasibility of Windows Server 2008 DFS replication over WAN link

    - by CesarGon
    We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point-to-point line. We have a Windows Server 2008 R2 domain controller on each side of the link. Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB, and it will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope that this will ensure that any user is directed to the closest (and fastest) server when requesting a file from the DFS folders. My concern is whether a 100-Mbps WAN link is good enough to guarantee DFS replication. I've no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't crash often) and our data transfer tests show that a 5 MB/s transfer rate is easily achieved; this is approximately 40% of the nominal bandwidth. I am also concerned about the latency: how long will users need to wait to see a change on one side of the link after it has been made on the other side? My questions are: Is this link between networks a reliable infrastructure on which to set up DFS replication? What latency times would be typical (seconds, minutes, hours, days)? Would you recommend that we go for DFS in this scenario, or is there a better alternative? Many thanks.

  • What could cause TFTP reloaded Cisco `running-config` on 871 to fail?

    - by xtian
    Cisco CCP's Write Configuration borked my 871w config while I was trying to set up port forwarding. I went through the basic steps to reconfigure the router. I looked to see if I could just reset the router: nope. I tested the 871's flash memory with fsck to see if there was a hardware failure: nope. Then I rewrote the minimal config for TFTP (which is the same as for Cisco's CCP app) and successfully uploaded a previously working running-config from Windows Vista using SolarWinds TFTP Server. Unfortunately the restore was not entirely successful. The old running-config was saved to the 871's startup-config and I can log in using the console port. Some other things that are working are the hostname and welcome message, but that's about it. Startup shows an error "SETUP: new interface NVI0 placed in 'shutdown' state" after the TFTP restore. The missing light on the access point modem for the Ethernet link shows that the 871's outside FE4 is not working. So... what's the possible problem with reloading a previously working config (approximately 4 months with the same config) via TFTP? Is there something I can look for on the 871 to verify the config? Or on Vista to validate the config file itself before I transfer it? Or is this a common TFTP issue?

    UPDATE: I missed the instruction from Cisco's TFTP page to delete aaa lines from the config (although a video from a SuperUser user didn't make this point in his otherwise most excellent demo). There were several lines of this sort; I deleted them and uploaded again. However, they're back. I assume they're added automatically? [nope, see answer]

    UPDATE 2: The reload of previous settings was successful, but this error remains. I don't even know now if it was there before or not. It seems irrelevant to the question.
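
    One IOS behaviour that explains both leftover and reappearing lines: copy tftp: running-config merges the file into the live config line by line (old commands the file doesn't override survive), while copying into startup-config and reloading replaces the stored config wholesale. A sketch of the replace-style restore (host and filename are examples):

      Router# copy tftp: startup-config
      Address or name of remote host []? 192.168.1.10
      Source filename []? 871w-confg
      Destination filename [startup-config]?
      Router# reload

    If the NVI0 warning survives even that, the saved file itself likely contains the NVI-style NAT commands (ip nat enable) that create that interface.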

  • How can i use the `eject` command on a computer i have SSH'd into?

    - by will
    So if I do eject on my machine, it works exactly as expected; however, if I SSH into the machine next to me and do the same thing, it does not work.

    My computer:

      eject: using default device `cdrom'
      eject: device name is `cdrom'
      eject: expanded name is `/dev/cdrom'
      eject: `/dev/cdrom' is a link to `/dev/sr0'
      eject: `/dev/sr0' is not mounted
      eject: `/dev/sr0' is not a mount point
      eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
      eject: `/dev/sr0' is not a multipartition device
      eject: trying to eject `/dev/sr0' using CD-ROM eject command
      eject: CD-ROM eject command succeeded

    Other computer:

      eject: using default device `cdrom'
      eject: device name is `cdrom'
      eject: expanded name is `/dev/cdrom'
      eject: `/dev/cdrom' is a link to `/dev/sr0'
      eject: `/dev/sr0' is not mounted
      eject: `/dev/sr0' is not a mount point
      eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
      eject: `/dev/sr0' is not a multipartition device
      eject: unable to open `/dev/sr0'

    If I look in the /dev/ dir, I find cdrom, which is a symlink to sr0, as mentioned by the verbose output of eject -v. On my machine, if the drive is open, trying to look at it will close it, and then:

      $ less sr0
      sr0 is not a regular file (use -f to see it)
      $ less -f sr0
      sr0: No medium found

    But on the other computer:

      $ less -f sr0
      sr0: Permission denied

    So I look at the files more, and get this on both machines:

      $ ls -la sr0
      brw-rw----+ 1 root cdrom 11, 0 Nov 12 10:13 sr0

    Does anyone know a way around this? I do not have root access.
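
    The trailing '+' on brw-rw----+ means the device carries an ACL on top of the group permissions. On desktop distros this is typically a per-session ACL that udev/ConsoleKit grants to whoever is logged in at the physical console, which is why the machine you are sitting at works while the SSH session doesn't. A quick way to confirm, and the usual fix (the last step needs someone with root):

      getfacl /dev/sr0   # shows the ACL; expect a user:<console-user>:rw entry
      groups             # are you in the 'cdrom' group on the remote machine?
      # if not, an admin can add you; group access applies after a fresh login:
      #   usermod -aG cdrom yourname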

  • How to Programmatically Split and Manipulate Rows of Data From Excel

    - by Charlene
    I am hoping one of you will be able to help get me started on this issue. I need to create some sort of macro or VBA code to split and manipulate rows of data in Excel. For this example, we have 5 rows of data: the first 3 rows are item information for order # 0000000000-00 and the last 2 rows are item information for order # 0000000000-01. I need one row ("HDR") for each order number, and one row ("ITM") for each product in that order. I have included an example below showing the data I will receive and the desired outcome.

    Raw data:

      order-id       product-num     date       buyer-name  product-name  quantity-purchased
      0000000000-00  10000000000000  5/29/2014  John Doe    Product 0     1
      0000000000-00  10000000000001  5/29/2014  John Doe    Product 1     2
      0000000000-00  10000000000002  5/29/2014  John Doe    Product 2     1
      0000000000-01  10000000000002  5/30/2014  Jane Doe    Product 2     1
      0000000000-01  10000000000003  5/30/2014  Jane Doe    Product 3     1

    Desired outcome:

      HDR  0000000000-00   John Doe   5/29/2014
      ITM  10000000000000  Product 0  1
      ITM  10000000000001  Product 1  2
      ITM  10000000000002  Product 2  1
      HDR  0000000000-01   Jane Doe   5/30/2014
      ITM  10000000000002  Product 2  1
      ITM  10000000000003  Product 3  1

    Any and all help would be much appreciated!!! Thank you.
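
    A minimal VBA sketch of the transformation, assuming the raw data sits in columns A:F of a sheet named "Sheet1" (row 1 holding the headers) and is already sorted by order-id; it writes the HDR/ITM rows to a new sheet:

      Sub SplitOrders()
          Dim ws As Worksheet, out As Worksheet
          Dim r As Long, o As Long
          Dim lastOrder As String
          Set ws = ThisWorkbook.Worksheets("Sheet1")
          Set out = ThisWorkbook.Worksheets.Add
          o = 1
          For r = 2 To ws.Cells(ws.Rows.Count, 1).End(xlUp).Row
              If ws.Cells(r, 1).Value <> lastOrder Then          ' first row of a new order
                  lastOrder = ws.Cells(r, 1).Value
                  out.Cells(o, 1).Value = "HDR"
                  out.Cells(o, 2).Value = ws.Cells(r, 1).Value   ' order-id
                  out.Cells(o, 3).Value = ws.Cells(r, 4).Value   ' buyer-name
                  out.Cells(o, 4).Value = ws.Cells(r, 3).Value   ' date
                  o = o + 1
              End If
              out.Cells(o, 1).Value = "ITM"
              out.Cells(o, 2).Value = ws.Cells(r, 2).Value       ' product-num
              out.Cells(o, 3).Value = ws.Cells(r, 5).Value       ' product-name
              out.Cells(o, 4).Value = ws.Cells(r, 6).Value       ' quantity-purchased
              o = o + 1
          Next r
      End Sub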

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application hosted on AWS, with several web servers and an NFS server. On the NFS server (a Linux server) several EBS volumes are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, that would render our site unusable for the hours while Amazon is working on it, and we want to prevent that downtime. I was thinking we could do so by setting up a new server temporarily, attaching the EBS drives (the RAID volume) to that server, and having our web servers point there during the maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works, thus having it mounted on two instances at the same time -- but I don't believe that is possible with the way filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
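
    Two-instances-at-once is ruled out even before filesystem semantics: an EBS volume can only be attached to one instance at a time. The move itself is the standard mdadm dance, with detach/attach replacing physical recabling; a sketch (device and mount names are assumptions):

      # on the old instance
      umount /export                 # quiesce the NFS clients first
      mdadm --stop /dev/md0
      # detach each EBS volume and attach it to the new instance (console or API)
      # on the new instance
      mdadm --assemble --scan        # reads the md superblocks, recreates /dev/md0
      mount /dev/md0 /export

    Since the array's metadata lives on the volumes themselves, the new instance needs no prior knowledge of the RAID layout.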

  • Mac updated just now, postgres now broken

    - by Dave
    I run PostgreSQL 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my thin server (mirroring Heroku cedar) as normal and then browse to my Rails app, I get:

      PG::Error
      could not connect to server: Permission denied
      Is the server running locally and accepting
      connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:

    - Running psql from the command line gives the same error.
    - I can run pgAdmin 3, connect via it and run SQL with no problems.
    - Running which psql shows the version at /usr/bin/psql.
    - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well. I know it's rubbish, but apart from a note on passwords I don't have any extra info on how I configured Postgres -- though the obvious implication is that I did not use the _postgres user.

    Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.

    Edit: Playing around based on http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

      $ sudo su postgreSQL
      bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
      pg_ctl: another server might be running; trying to start server anyway
      server starting
      bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
      2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
      bash-3.2$ exit
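
    The error message itself carries a clue: the client is hunting for the socket in /var/pgsql_socket, which is where Apple's bundled PostgreSQL tooling expects it, while an EnterpriseDB-style 9.1 install under /Library/PostgreSQL puts its socket in /tmp and ships its own psql. The server is evidently up (postmaster PID 68 holds the lock file), so the mismatch is probably client-side; a quick way to test that theory:

      which -a psql                  # is /usr/bin/psql (Apple's) shadowing /Library/PostgreSQL/9.1/bin/psql?
      psql -h /tmp -U postgres       # point at the socket directory the 9.1 build uses
      psql -h localhost -U postgres  # or sidestep unix sockets entirely over TCP

    If those connect, putting /Library/PostgreSQL/9.1/bin first in PATH (or adjusting the app's database host/socket setting) should restore the old behaviour.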

  • Can't pin modified shortcuts to the Windows 7 task bar

    - by Coder
    I have a shortcut to a .bat file which I pin to the taskbar using a workaround: pin some other icon, then modify it to point at the .bat file. This seems to work. Now I make a copy of that shortcut, point it to a different .bat file, rename it, and I can't pin this one to the taskbar. I have to find some other new, unused icon to pin, pin it, then modify it manually. The other problem this causes is that Windows seems to track which icons were pinned even if they are modified after the fact. As such, if I use Media Player as my dummy icon, pin it, then alter its name and target to point to a .bat file, I can't re-pin Windows Media Player; and if I select unpin from Windows Media Player, it unpins my shortcut to my .bat file. I can't believe how ridiculous this is. Is there a way to pin anything I want to the taskbar (i.e. a .bat file in my case) that does not cause problems like this? Is there an easy way to copy an existing shortcut, modify it and re-pin it to the taskbar? The reason I want to copy an existing one is that I start a .bat file (in particular git bash) and I set properties on the window like Quick Edit, an increased screen buffer, and its position and size; I don't want to have to redo this for every single icon I pin, since they will be identical aside from the shortcut target.

  • OpenVPN IPV6 Tunnel Radvd

    - by Arenstar
    Hello. I have an interesting question regarding IPv6 + OpenVPN (my version is OpenVPN 2.1.1). I have been given a native /64 IPv6 network (for this example, 2001:acb:132:acb::/64). The plan is to route this block through OpenVPN and into an office (for testing purposes). To explain: I have a CentOS box as the first Linux "router" in a datacenter, and an Ubuntu box as the second Linux "router" in the office. I have created a simple point-to-point tunnel using tun (based on IPv4 addresses to start the tunnel).

    On CentOS I have assigned:

      /sbin/ip addr add fed1::1/128 dev eth0
      /sbin/ip addr add fed2::2/128 dev tun0
      /sbin/ip route add 2001:acb:132:acb::/64 dev tun0   ## IPv6 block down the tunnel
      /sbin/ip route add ::/0 dev eth0                    ## default out to gateway

    On Ubuntu I have assigned:

      /sbin/ip addr add fed1::3/128 dev tun0
      /sbin/ip addr add fed1::4/128 dev eth0
      /sbin/ip route add 2001:acb:132:acb::/64 dev eth0   ## IPv6 block down to eth0
      /sbin/ip route add ::/0 dev tun0                    ## default up the tunnel

    I have also run on both servers:

      sysctl -w net.inet6.ip6.forwarding=1

    Looks good... right? Wrong. :( I am not able to ping fed1::1 from fed1::4 (Ubuntu), though I can ping :4, :3 and :2. However, I can ping fed1::1 and fed1::2 from :3 (very strange). I am able to access the Internet from any IPv6 interface on the CentOS box, but clearly not from the Ubuntu box. Eventually I will run radvd on the Ubuntu box's eth0 and autoconf the network with IPv6 addresses. Anyone with some advice / tips to help me out? Cheers.
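
    One concrete thing to check: net.inet6.ip6.forwarding is the BSD sysctl name, so on these Linux boxes that command would have failed or done nothing. The Linux switch lives under net.ipv6:

      sysctl -w net.ipv6.conf.all.forwarding=1
      # verify on the interfaces that matter
      sysctl net.ipv6.conf.eth0.forwarding net.ipv6.conf.tun0.forwarding

    With forwarding off on the Ubuntu box, the observed pattern (tunnel neighbours reachable, anything beyond them not) is exactly what you'd expect, so this is worth ruling out before debugging the routes themselves.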

  • Thoughts on home NAS server

    - by user826955
    I currently have a NAS with a 2x2TB HDD + 1x16GB SSD layout on a mini-ITX Atom board, in a Lian Li PC-Q07 case. On this system I was running FreeBSD 8 with a gmirror RAID 1 setup, which was enough for my needs. So far I was using the NAS as: a file server over AFP (only Mac clients); an SVN server hosting all the source trees of my projects; a JIRA host (performance was okay-ish); and a Time Machine backup target for the Macs. Power consumption was about 38 W, although I did not put the HDDs to sleep when unused (I think this is not possible in a RAID setup).

    I liked the NAS because the performance through gigabit LAN was good (enough for my needs), power consumption was good, and it's a pretty small case that fits in one of my cupboards. I disliked it a bit because it was noisy (the Q07 case vibrated a good amount because of the HDDs, so I switched the NAS off every evening) and because I do not have a real backup of the data on it, only the internal RAID 1 as a safety net. I really don't want to lose my source trees under any circumstances, so I would sleep better if I knew I had regular backups somewhere.

    Recently the board seems to have died: I can't boot anymore. Thus I am thinking about a redesign of my NAS (I still have to find out which parts are broken; I probably need to replace the mainboard and SSD, the HDDs seem to be okay).

    First of all, I was wondering what other users have as backup for their NAS. Are you actually using a second NAS and regularly copying the data over to keep it safe? Or is there a better solution? I was thinking about getting a cheap single-disk NAS like the Synology DS112j and using rsync or something similar to regularly copy data over to it (wake the second NAS for the copy, shut it down afterwards). Although this approach seems somewhat weird, it would have the benefit(?) that I could use a single disk instead of RAID in the main NAS, put that disk to sleep when idle, and have the NAS running 24/7 with low energy consumption (I found no way to do this with a gmirror setup). Is there any recommended backup solution for a small NAS?

    Then I was thinking about a different RAID setup. Since I have to buy a new mainboard as well as an SSD, I might as well switch to an i3 board with more RAM and also switch to ZFS. I am not familiar with ZFS, I've never used it, but I read and hear much about it. Would it be viable to set up ZFS storage with only 2 disks? Can I easily extend this storage with more disks once I choose to add some? I could maybe get a new case like the Fractal Design Array R2, which has more 3.5" slots. I could as well get another 2 disks, but I would prefer sticking with the existing 2 for energy/heat/noise reasons. Should I go for ZFS storage or stick with my gmirror setup? I would also like to keep FreeBSD as the operating system, and I don't need any web GUI (that is, I don't need/want to use FreeNAS or Openfiler etc). Does anyone maybe have a sample setup in use, so I can compare energy consumption/noise/software setup? Any guidance towards the NAS of my dreams (silent, low energy, safe with backups) is much appreciated.
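
    The wake-copy-shutdown backup idea is workable with stock tools; a minimal nightly sketch run from cron on the primary NAS (MAC address, hostname and paths are made up, and it assumes the backup box honours Wake-on-LAN, allows SSH, and that a wakeonlan utility is installed, e.g. from ports):

      #!/bin/sh
      wakeonlan 00:11:22:33:44:55              # wake the backup NAS
      sleep 120                                # give it time to boot
      rsync -aH --delete /tank/ backup:/tank/  # mirror the data
      ssh backup poweroff                      # shut it down again

    This gives a point-in-time copy on hardware that is powered off (and thus safe from most mishaps) for all but a few minutes a day. Note that --delete also propagates accidental deletions, so pairing it with periodic snapshots on the backup side is prudent.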

  • fedora, dhcpd fails to start

    - by soxs060389
    History: I got a tiny shiny plug server which I want to plug into my ADSL router (or however you want to call it) on one end (eth0); on the other end (eth1) I want to run a DHCP server for my LAN. At the moment I am stuck getting the LAN side to work. The OS is Fedora 12. I configured my /etc/dhcp/dhcpd.conf like this:

      # DHCP Server Configuration file.
      # see /usr/share/doc/dhcp*/dhcpd.conf.sample
      # see 'man 5 dhcpd.conf'
      option domain-name "unknown.org";
      option domain-name-servers 192.168.44.1;
      option subnet-mask 255.255.255.0;
      option broadcast-address 192.168.44.255;
      default-lease-time 86400;
      max-lease-time 172800;

      subnet 192.168.44.0 netmask 255.255.255.0 {
          host fedorabigbox {
              hardware ethernet 00:19:66:8E:61:74;
              fixed-address 192.168.44.21;
          }
          #host mobile
          #{
          #    hardware ethernet ***;
          #    fixed-address 192.168.44.22;
          #}
          range 192.168.44.100 192.168.44.110;
          option routers 192.168.44.1;
      }

      # this is just a dummy: many howtos I read suggest adding a
      # subnet/netmask block for each interface
      subnet 192.168.33.0 netmask 255.255.255.0 {
          range 192.168.33.100 192.168.33.110;
          option routers 192.168.33.1;
      }

    But the server fails to start when I try to launch it via /etc/init.d/dhcpd start. In general it would be nice if someone could point me to an in-depth explanation of how this works; I am pretty new to this stuff. More concretely: how do I point one subnet at eth1 and the other at eth0? Does someone see any errors or flaws? The syntax should be correct; I already checked it with the dhcpd syntax check. Thanks for any help.
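
    Two things usually bite here. First, dhcpd does not get pointed at interfaces in dhcpd.conf at all: it matches each subnet declaration against the addresses already configured on the interfaces, so eth1 needs its static 192.168.44.1/24 before dhcpd will serve that subnet, and the daemon refuses to start if it finds no matching interface. The dummy block is therefore unnecessary. Second, on Fedora the daemon can be pinned to the LAN side via its sysconfig file. A sketch:

      # why did it fail? dhcpd logs the reason to syslog
      tail -n 50 /var/log/messages

      # give eth1 its address first (persistently via ifcfg-eth1, or for a quick test:)
      ip addr add 192.168.44.1/24 dev eth1

      # /etc/sysconfig/dhcpd -- restrict dhcpd to the LAN interface
      DHCPDARGS="eth1"

    With eth1 addressed and dhcpd bound to it, the 192.168.33.0/24 block can simply be removed.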

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish session sharing between the new servers by changing only 3 lines in the php.ini file (session.save_handler and session.save_path). I replaced:

      session.save_handler = files

    with:

      session.save_handler = memcache

    Then on the master webserver I set session.save_path to point to localhost:

      session.save_path = "tcp://localhost:11211"

    and on the slave webserver I set session.save_path to point to the master:

      session.save_path = "tcp://192.168.0.1:11211"

    Job done. I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load-balanced to the slave webserver their session is fetched across the network from the master webserver. I was wondering if I could define two save_paths, so the machines look in their own session storage before using the network. For example:

      Master: session.save_path = "tcp://localhost:11211, tcp://192.168.0.2:11211"
      Slave:  session.save_path = "tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication -- more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
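
    For what it's worth, the memcache extension treats a comma-separated save_path as one hashed pool rather than an ordered preference list: each session id maps deterministically to a single daemon, so the "try local first" reading doesn't apply, and with two servers roughly half of all session reads will still cross the network regardless of listing order. What does matter is that both machines describe the pool identically so they hash sessions the same way; a sketch:

      ; php.ini on BOTH webservers -- identical pool, identical order
      session.save_handler = memcache
      session.save_path = "tcp://192.168.0.1:11211,tcp://192.168.0.2:11211"

    Listing localhost on one box and an IP on the other would likely make the two machines hash the same session to different daemons, which is a subtle way to lose sessions on every other request.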

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (it installed, rebooted, I can log in, run GnuChess, poke about)... but... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then (WAS: everything stops, icon and mouse included; literally nothing happens for 5 minutes; Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the touchpad-driven mouse freeze, there's apparently a little disk activity occurring about every 5 seconds, and after waiting 5-10 minutes the behavior doesn't change). A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox start since the Ubuntu install, maybe Firefox mishandles Internet access? But why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.) EDIT: I tried to run the Disk Manager tool, not that I cared what it was, just a menu-available application. It started up like Firefox: I get a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.

  • Rename Active Directory domain following Windows 2000 -> 2008 migration.

    - by ewwhite
    I'm working with a site that needs an internal DNS domain rename. It currently has a DNS name of domain.abc.com and NT name of ABC. I'm trying to get to a DNS name of abctrading.com and NT name of ABCTRADING. Split DNS would be used. The site originally ran from a single Windows 2000 domain controller hosting AD, file, print, DHCP and DNS services. There was no Exchange system in the environment. The 50 client PCs are all Windows XP, with a handful of users using roaming profiles. All users are in a single OU and there are no group policies/GPOs. I'm a Linux engineer, but I have been trying to guide another group of consultants toward a more suitable setup. With their help, we moved the single Windows 2000 system to a set of Windows 2008 R2 servers separated into domain controller and file/print systems (virtualized), and we are also trying to add an Exchange 2010 system to this mix. The Windows 2000 server was demoted and is no longer in the picture. This is the tricky part: the client wants the domain renamed, and the consultants aren't quite sure how to get through it without another 32-40 hours of testing/implementation. They say there's considerable risk in doing the rename without a completely isolated test environment. However, this rename has to be done before installing Exchange, so we're stuck at this point. I'd like to know what's involved in renaming the domain at this point. We're on Windows Server 2008 and the AD is healthy now. Coming from a Linux background, it seems as though there should be a reasonable path to this. Also, since the original domain appears to be a child/subdomain, would that be a problem here? I'd appreciate any guidance.
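
    For orientation, the supported tooling for this is the rendom/gpfixup pair that ships with the Server 2008 administration tools; the flow below is a sketch of the documented sequence, not a substitute for reading Microsoft's domain-rename guide. Notably, the rename is far simpler with no Exchange in place, which argues for doing it before the Exchange 2010 install:

      rem dump the current forest naming into Domainlist.xml, then edit it:
      rem   domain.abc.com -> abctrading.com, ABC -> ABCTRADING
      rendom /list
      rendom /upload
      rendom /prepare
      rendom /execute
      gpfixup /olddns:domain.abc.com /newdns:abctrading.com /oldnb:ABC /newnb:ABCTRADING
      rendom /clean

    Domain members pick up the new name over a couple of reboots; roaming-profile paths and any hardcoded UNC references to the old name are the usual cleanup items afterwards.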

  • My Windows 7 has suddenly stopped displaying Unicode symbols

    - by Felix Dombek
    For some strange reason, my computer suddenly doesn't show certain Unicode characters anymore! I have no idea what happened. Affected applications include Windows Explorer (should be Japanese characters), Google Chrome (should be a heart), and Winamp (should be stars). Russian, German etc. characters are displayed normally. Chrome also displays Japanese script on websites, but not in the GUI. How can I fix it?

    Update: I tried to use System Restore to fix it. I needed to go back in time quite a while, because the most recent restore points didn't solve it, so I used one from the middle of November. After that restore, Unicode symbols were displayed again. Then I updated my system with Windows Update again, because those updates were removed during the restore. After that, the error occurred again! I then restored to a point before the new updates, but the error persists; the old restore point (which I used before) is gone and there are currently no other snapshots of the system. Any suggestions on what to do now?

    Update 2: I found a workaround: Control Panel -> Region and Language -> Administrative -> change the language for non-Unicode programs to Japanese (Japan). All the mentioned programs display their symbols correctly again. However, I don't consider this a fix, because these programs are not actually Unicode-incompatible, and it also leads to some (non-serious) artifacts in other programs. I still welcome an answer that tells me what went wrong here and how to fix the issue.

  • Will a 2.4GHz WAP interfere with a 5.0GHz WAP if placed directly next to each other

    - by Dan
    This is mostly a curiosity question for people who know more about radio and Wi-Fi than I do. The 2.4GHz band is massively overpopulated near my house, to the point of sometimes getting 1000ms pings to the router from only a few feet away. inSSIDer finds at least 10 broadcasting SSIDs within around 15 seconds of starting, so this isn't a real surprise to me! Sometimes I can get good results by changing the channel to something like 3 or 8, but it's usually temporary, as the others use Auto Channel and hop around. Now, the router I have is capable of 5.0GHz, as is the laptop I type this on. Switching to 5.0GHz gives superb results: I can download at ~90Mbps and get consistent 1ms pings. The problem is that only this laptop supports 5.0GHz! My question: would I still get decent 5.0GHz performance if I place a 2.4GHz access point directly next to my router? And, indeed, will 2.4GHz continue working as normal? Testing would be an obvious step, but I threw out all my superfluous equipment in a recent house move. My understanding is that I should get good performance, certainly in comparison to having two devices using the same frequency range, but I do believe there will be some impact by virtue of them being directly next to each other. (Cabling is not an option, as this is a rented house.)

  • Cisco 678 Will Not Work using PPPoE - Possibly Because I Configured it Incorrectly..?

    - by Brian Stinar
    I am attempting to configure a Cisco 678 because I am totally sick of my Actiontec. However, I am running into some problems: it seems the Cisco is able to train the line, but I am unable to ping out. I am all right at programming, but still learning a lot when it comes to being a system administrator, so I apologize in advance if I did something ridiculous or am attempting to configure this device to do something it was not designed to do. It is almost as if I am not correctly configuring the device to grab its IP using PPPoA (like my Actiontec does); the output from "show running" (below) makes me think this too. These are the commands I ran in order to configure it:

      # en
      # set nvram erase
      # write
      # reboot
      # en
      # set nat enable
      # set dhcp server enable
      # set PPP wan0-0 ipcp 0.0.0.0
      # set ppp wan0-0 dns 0.0.0.0
      # set PPP wan0-0 login xxxxx      // my actual login
      # set PPP wan0-0 password yyyyy   // my actual password
      # set PPP restart enabled
      # set int wan0-0 close
      # set int wan0-0 vpi 0
      # set int wan0-0 vci 32
      # set int wan0-0 open
      # write
      # reboot

    Here is the output from a few commands I thought could provide some useful information:

      cbos#ping 74.125.224.113
      Sending 1 8 byte ping(s) to 74.125.224.113 every 2 second(s)
      Request timed out

      cbos#show version
      Cisco Broadband Operating System
      CBOS (tm) 678 Software (C678-I-M), Version v2.4.9 - Release Software
      Copyright (c) 1986-2001 by cisco Systems, Inc.
      Compiled Nov 17 2004 15:26:29
      DMT FULL firmware version G96
      NVRAM image at 0x1030f000

      cbos#show errors
      - Current Error Messages -
      ## Ticks         Module Level Message
      0  000:00:00:00  PPP    Info  IPCP Open Event on wan0-0
      1  000:00:00:14  ATM    Info  Wan0 Up
      2  000:00:00:14  PPP    Info  PPP Up Event on wan0-0
      3  000:00:01:54  PPP    Info  PPP Down Event on wan0-0
      Total Number of Error Messages: 4

      cbos#show interface wan0
      wan0 ADSL Physical Port
      Line Trained
      Actual Configuration:
        Overhead Framing: 3
        Trellis Coding: Enabled
        Standard Compliance: T1.413
        Downstream Data Rate: 1184 Kbps
        Upstream Data Rate: 928 Kbps
        Interleave S Downstream: 4
        Interleave D Downstream: 16
        Interleave R Downstream: 16
        Interleave S Upstream: 4
        Interleave D Upstream: 8
        Interleave R Upstream: 16
      Modem Microcode: G96
      DSP version: 0
      Operating State: Showtime/Data Mode
      Configured:
        Echo Cancellation: Disabled
        Overhead Framing: 3
        Coding Gain: Auto
        TX Power Attenuation: 0dB
        Trellis Coding: Enabled
        Bit Swapping: Disabled
        Standard Compliance: T1.413
        Remote Standard Compliance: T1.413
        Tx Start Bin: 0x6
        Tx End Bin: 0x1f
        Data Interface: Utopia L1
      Status:
        Local SNR Margin: 19.0dB
        Local Coding Gain: 7.5dB
        Local Transmit Power: 12.5dB
        Local Attenuation: 46.0dB
        Remote Attenuation: 31.0dB
      Local Counters:
        Interleaved RS Corrected Bytes: 0
        Interleaved Symbols with CRC Errors: 2
        No Cell Delineation Interleaved: 0
        Out of Cell Delineation Interleaved: 0
        Header Error Check Counter Interleaved: 0
        Count of Severely Errored Frames: 0
        Count of Loss of Signal Frames: 0
      Remote Counters:
        Interleaved RS Corrected Bytes: 0
        Interleaved Symbols with CRC Errors: 1
        No Cell Delineation Interleaved: 0
        Header Error Check Counter Interleaved: 0
        Count of Severely Errored Frames: 0
        Count of Loss of Signal Frames: 0

      cbos#show int wan0-0
      WAN0-0 ATM Logical Port
      PVC (VPI 0, VCI 32) is configured.
      ScalaRate set to Auto
      AAL 5 UBR Traffic
      IP Port Enabled

      cbos#show running
      Warning: traffic may pause while NVRAM is being accessed
      [[ CBOS = Section Start ]]
      NSOS MD5 Enable Password = XXXX
      NSOS MD5 Root Password = XXXX
      NSOS MD5 Commander Password = XXXX
      [[ PPP Device Driver = Section Start ]]
      PPP Port User Name = 00, "XXXX"
      PPP Port User Password = 00, XXXX
      PPP Port Option = 00, IPCP,IP Address,3,Auto,Negotiation Not Required,Negotiable,IP,0.0.0.0
      PPP Port Option = 00, IPCP,Primary DNS Server,129,Auto,Negotiation Not Required,Negotiable,IP,0.0.0.0
      PPP Port Option = 00, IPCP,Secondary DNS Server,131,Auto,Negotiation Not Required,Negotiable,IP,0.0.0.0
      [[ ATM WAN Device Driver = Section Start ]]
      ATM WAN Virtual Connection Parms = 00, 0, 32, 0
      [[ DHCP = Section Start ]]
      DHCP Server = enabled
      [[ IP Routing = Section Start ]]
      IP NAT = enabled
      [[ WEB = Section Start ]]
      WEB = enabled
      cbos#

    wtf...? Thank you all very much for taking the time to read this, and for the help.

  • Turn Excel spreadsheet into a formula

    - by ?????? ??????????
    I have an Excel spreadsheet that performs a complex computation that is not trivial to turn into a macro or a single-cell formula. The spreadsheet has about 10 different inputs (values a human enters in different cells of the spreadsheet) and it outputs 5 independent calculations (in 5 different cells) based on that input. The calculation uses some pre-entered data in the spreadsheet (about 100 different constants) and does some look-ups on them. Now I would like to use this whole spreadsheet as a formula on a different spreadsheet, to run a set of input values through it and produce the corresponding set of output values. Imagine this as creating a table with 10 columns for the input variables and 5 columns for the outputs, then copying each input row into the calculator sheet and copying the output back into the results table. For instance: A1, A2, A3, ... A10 are cells where someone enters values; through a series of calculations, B1, B2, B3, B4 and B5 are updated by formulas. Can I use the whole series of calculations from A1..A10 into B1..B5 without creating one massive huge formula or a VBA macro? I want to have a set of input values in 100 rows from A100, B100, C100, ... J100 onward, then do some Excel magic that will:
    1. copy the values from A100...J100 into A1 to A10
    2. wait for the result to appear in B1 to B5
    3. copy the values from B1 to B5 into K100 to O100
    4. repeat steps 1 to 3 for all rows from 100 to 150
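
    Short of Excel's built-in what-if Data Table (which only handles one or two changing inputs), this replay loop is exactly what a small macro does well; a minimal VBA sketch, assuming everything lives on one sheet named "Calc" (a hypothetical name) and that the loop maps to the cell ranges and steps above:

      Sub RunAllScenarios()
          Dim ws As Worksheet
          Dim r As Long, c As Long
          Set ws = ThisWorkbook.Worksheets("Calc")
          For r = 100 To 150
              For c = 1 To 10                  ' step 1: row r, columns A..J -> inputs A1:A10
                  ws.Cells(c, 1).Value = ws.Cells(r, c).Value
              Next c
              ws.Calculate                     ' step 2: force the model to update
              For c = 1 To 5                   ' step 3: outputs B1:B5 -> columns K..O of row r
                  ws.Cells(r, 10 + c).Value = ws.Cells(c, 2).Value
              Next c
          Next r
      End Sub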
