Search Results

Search found 95574 results on 3823 pages for 'mac osx server'.


  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
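
    If the polling side must observe writes in order, a hedged first step (assuming root access on host A and that losing client-side caching is acceptable) is to mount the export with attribute caching disabled and synchronous writes; the mount options are standard NFS client options, while the server name, export and mount point below are placeholders:

        # On host A (the writer): remount the share with client-side caching minimised
        umount /mnt/shared
        mount -t nfs -o sync,noac,actimeo=0 serverB:/export /mnt/shared

        # In the writing process, follow each write with fsync()/fdatasync() so the
        # client pushes the data to the server immediately rather than batching it.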

    Read the article

  • Outlook 2007 - Mailbox doesn't show my Items (like Calendar)

    - by cyntaxx
    Hi all, I have an Exchange Server 2003 and 2 IBM laptops (A & B) with XP SP3. Office 2007 is installed on both laptops. On laptop "A", Outlook doesn't show my Calendar and Notes entries in the mailbox tree. I can click on the Calendar tab and the entries are there. On laptop "B" it is working fine. I know that I can right-click on "Mailbox", choose "Create new folder" and then select, e.g., my calendar. It creates the folder, but I can't access it through my mailbox tree. Clicking on the Calendar tab works fine again. So, the mailbox is fine (I think). There must be a failure in Outlook. I tried these commands with no positive result: Outlook /cleanviews Outlook /resetfolders I want to avoid a repair installation of Outlook, because the whole Office suite would need to be repaired. (And both laptops belong to my boss.) :) Thanks, Toby
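
    For reference, a minimal sketch of how such switches are run (Start > Run, or a command prompt, with Outlook closed). The first two lines are the switches already tried above; /resetnavpane is an additional, commonly suggested switch for a broken folder/navigation pane and is an assumption here, not something confirmed in the question:

        outlook.exe /cleanviews
        outlook.exe /resetfolders
        outlook.exe /resetnavpane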

    Read the article

  • How to set up a virtual host in Ubuntu running on an Amazon EC2 instance?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp when I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because it's now in production and it's a little different. Here's my virtual host config: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName apps.mydomain.com/myapp DocumentRoot /var/www/myapp/public <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride All Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Any idea why I still see the files instead of being pointed to the document root? Just in case someone might ask, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
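
    One likely problem above: ServerName cannot contain a path, so "apps.mydomain.com/myapp" is not a valid server name. A hedged sketch of one way to serve /var/www/myapp/public under the /myapp URL path, using standard Apache 2.2 directives and the paths from the question:

        <VirtualHost *:80>
            ServerName apps.mydomain.com
            DocumentRoot /var/www

            # Map http://apps.mydomain.com/myapp to the Laravel public/ directory
            Alias /myapp /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Pointing the alias (or the DocumentRoot of a dedicated vhost) at public/ also keeps the rest of the Laravel tree out of the web root, which addresses the "anyone can access the files" concern.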

    Read the article

  • mdadm: Replacing array with entirely new drives

    - by hellfur
    I have a server with three 500GB drives, with most of my data in a RAID5 configuration spanning the three of them. I just purchased and installed four 1TB drives, and the intention is to move off of the old drives and onto the new ones. I have enough SATA ports and power connectors to power all seven of my drives at once, so I've kept the old RAID running while I figure out what to do with the new drives. My question is: Should I create a whole new array on the 1TB drives, then move everything over and reconfigure linux to boot from the new md arrays? Or should I just expand the array, swapping out each of the three 500GBs with the 1TB, then adding the final drive? I've read up on the mdadm extending drive setup, and it makes sense, but I imagine I would use one of the drives as a full backup while I move things over, then add that drive back into the array once things are up and running on three of the 1TB drives, so there's some complication in going that route as well... I'm just not sure which is safer/recommended.
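
    For the "build a fresh array and copy over" option, a hedged sketch; the device names /dev/sdb1-/dev/sde1 are placeholders, so double-check them against your own system before running anything destructive:

        # Create a new RAID5 array across the four 1TB drives (the old array stays untouched)
        mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
        mkfs.ext4 /dev/md1
        mount /dev/md1 /mnt/newraid

        # Copy the data across while the old RAID5 keeps running
        rsync -aHAX /mnt/oldraid/ /mnt/newraid/

        # Record the new array so it assembles at boot, then update fstab and the bootloader
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf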

    Read the article

  • apache/httpd responds slower under EL6.1 than EL5.6 (centos)

    - by daniel
    I've read through other threads on performance differences between RHEL6 and RHEL5, but none seems a close match to mine. My issue manifests itself as a slightly slower average response time (about 20 ms) per request. I have about 10 servers each, of the same hardware spec, running CentOS 6.1 and CentOS 5.6, and the issue is consistent across the group. I am running Ruby on Rails with Passenger under the Apache worker MPM (mod_worker). The Apache config is identical (checked out from the same SVN repo), Ruby and Passenger are identical builds, and the application is identical and served traffic round robin. An interesting clue from server-status: the CentOS 6.1 servers have a steady 20-40 threads in the "Reading Request" state while the CentOS 5.6 servers have around 1. I'm graphing this so I can see it trend over time. I also have a bunch of much newer machines that are significantly faster and are running CentOS 6.1. They beat all the older machines on response time, but I can see they also have a steady 20-40 threads in the "Reading Request" state. This makes me believe I can get their response time down if I can figure out what is holding up these requests. My gut tells me that I need to tune some network setting in sysctl, but I haven't figured it out yet. Help is appreciated.
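
    A hedged diagnostic sketch for the sysctl hunch above: dump the network-related kernel settings on one box of each generation and diff them; the hostnames are placeholders:

        ssh cent56-box 'sysctl -a 2>/dev/null | grep -E "^net\.(core|ipv4)"' | sort > cent56.sysctl
        ssh cent61-box 'sysctl -a 2>/dev/null | grep -E "^net\.(core|ipv4)"' | sort > cent61.sysctl
        diff -u cent56.sysctl cent61.sysctl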

    Read the article

  • SSO to multiple websites from Sharepoint website

    - by Aico
    We have an intranet based on SharePoint 2010. In this intranet we have several links to other web servers within the same Active Directory, for example a link to our Outlook Web Access site on our Exchange 2010 environment. We have three different setups that visit this SharePoint environment and the other web servers: 1) Windows 7 clients that are members of the Active Directory, 2) home PCs that connect through an SSL VPN appliance, and 3) standalone thin clients (Windows 7 Embedded) within the corporate network. The goal is to let people sign in only once. For the first group this isn't a problem, because AD Integrated Authentication works fine and the Windows logon is passed on to SharePoint and the other web servers. The second group is also working fine because of the LDAP integration that the SSL VPN appliance uses. The third group is, however, experiencing issues: they need to enter their credentials every time they click a link to another web server. They first need to enter credentials for accessing the SharePoint environment; when clicking the link for their webmail they have to re-enter their credentials, and so on. Can someone tell me what the best solution would be to also get SSO working for the third group? Some extra information: we also have a Forefront TMG server in our environment. I read somewhere that Forefront might be part of a solution for this problem, but I'm not sure how. Maybe someone here can help me? I look forward to some help. Best regards, Aico

    Read the article

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have 2 web farm configurations, one with 2 member servers and one with 3 member servers. I have health monitoring set up on both farms and the monitoring tool reports all servers as being healthy. However, after a while all the servers are marked as "Unavailable" and "Healthy" in the "Monitoring and Management" screen (in the "Servers" screen they are all listed with "Yes" in the "Ready for Load Balancing" column). Viewing the event log on either the web farm controller or any of the farm servers doesn't reveal anything interesting; there are no warnings or errors in the period where the servers became unavailable. There are a couple of informational events about the worker process getting shut down due to inactivity, but I hope this is not the cause, since that would mean the farms will die during the night when the load is low. Am I missing something? EDIT: Btw, I think it's very odd that the application pool shuts down on the servers, since the health monitoring system is polling an aspx page on each server. Shouldn't that keep them going? EDIT2: Now I've also experienced this problem with the RTW version of Web Farm Framework 2.
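
    If the idle shutdowns do turn out to be related, a hedged sketch of how to disable the idle timeout on the member servers; the pool name is a placeholder and the appcmd syntax is the standard IIS 7.x form:

        REM Run in an elevated prompt on each member server
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00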

    Read the article

  • iptables port redirection on Ubuntu

    - by Xi.
    I have an apache server running on 8100. When we open http://localhost:8100 in a browser we see the site running correctly. Now I would like to redirect all requests on 80 to 8100 so that the site can be accessed without the port number. I am not familiar with iptables, so I searched for solutions online. This is one of the methods that I have tried: user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT user@ubuntu:~$ sudo iptables -A INPUT -p tcp --dport 8100 -j ACCEPT user@ubuntu:~$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8100 It's not working: the site works on 8100 but not on 80. If I print out the rules using "iptables -t nat -L -n -v", this is what I see: user@ubuntu:~$ sudo iptables -t nat -L -n -v Chain PREROUTING (policy ACCEPT 14 packets, 2142 bytes) pkts bytes target prot opt in out source destination 0 0 REDIRECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 redir ports 8100 Chain INPUT (policy ACCEPT 14 packets, 2142 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 177 packets, 13171 bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 177 packets, 13171 bytes) pkts bytes target prot opt in out source destination The OS is Ubuntu running in VMware. I thought this would be a simple task, but I have been working on it for hours without success. :( What am I missing?
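
    One thing worth noting: the PREROUTING chain only sees packets arriving from outside the machine, so testing with http://localhost from the VM itself never hits that REDIRECT rule, which matches the 0-packet counter shown above. A hedged sketch, assuming locally generated traffic should be redirected too and that testing is then done against the VM's own IP:

        # Loopback traffic bypasses PREROUTING; add a matching rule to the nat OUTPUT chain
        sudo iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-ports 8100

        # Test from another machine on the network (or against the VM's own address)
        curl -I http://<vm-ip-address>/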

    Read the article

  • Network outage caused by SMC8013WG Cable Modem/Router ?

    - by mkocubinski
    At work, we have a basic Class C network. The gateway/router is an SMC8013WG (stock Comcast commercial cable modem) and a simple unmanaged switch (HP ProCurve 1400 24G). The SMC8013WG is our default gateway as well as the DHCP server. Periodically, I'd say almost every other day, the entire network will just stop responding: I won't be able to ping/see the gateway, any computers on our local network, or anything on the Internet. The only way to fix this is to unplug the Comcast cable modem, wait, and plug it back in. This unfailingly fixes the problem. But it doesn't make much sense to me: shouldn't the network still be fine locally, since everything is plugged into the switch anyway? Why would resetting the router fix this? Can anyone suggest anything to check in order to narrow this problem down? Just to be clear, here is the basic topology: { Internet } -- (12.345.67.89) Comcast Cable Modem (192.168.1.1) -- Switch -- 192.168.1.2-254 P.S. Our IT guy is in about 3 hours a day every other week or so, so we're kind of on our own most of the time.
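
    A hedged checklist to run from one of the LAN machines the next time the outage happens, to separate "switch/LAN problem" from "gateway/DHCP problem"; the addresses come from the topology above and the commands are for a Windows client:

        REM Can we reach the gateway at all?
        ping 192.168.1.1
        REM Can we reach another machine on the same switch? (pure LAN test; substitute a real local host)
        ping 192.168.1.50
        REM Is there a sane ARP entry for the gateway?
        arp -a
        REM Has our DHCP lease expired or changed?
        ipconfig /all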

    Read the article

  • Subversion: Secure connection truncated

    - by Nick
    Hi, I'm trying to set up a Subversion server with Apache2/WebDAV access. I've created the repository and configured Apache according to the official book, and I can see the repository in a web browser. The browser shows: conf/ db/ hooks/ locks/ Although clicking any of those links gives an empty XML document like: <D:error> <C:error/> <m:human-readable errcode="2"> Could not open the requested SVN filesystem </m:human-readable> </D:error> I've never used Subversion before, so I assume this is correct? Anyway, when I try to connect via a command-line client, it asks for my password, I give it, and then I get the (useless) error message: svn: OPTIONS of 'https://svn.mysite.com': Could not read status line: Secure connection truncated (https://svn.mysite.com) The command I'm using is: svn checkout https://svn.mysite.com/ svn.mysite.com Subversion was installed using Ubuntu's package manager. It's version 1.6.6 on Ubuntu 10.04. My VirtualHost configuration: <VirtualHost 123.123.12.12:443> ServerAdmin [email protected] ServerName svn.mysite.com <Location /> DAV svn SVNParentPath /var/svn/repos SVNListParentPath On AuthType Basic AuthName "Subversion Repository" AuthUserFile /etc/subversion/passwd Require valid-user </Location> # Setup The SSL Certificate Paths SSLEngine On SSLCertificateFile /etc/ssl/certs/mysite.com.crt SSLCertificateKeyFile /etc/ssl/private/dmysite.com.key </VirtualHost>
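
    Two things that commonly produce exactly these symptoms and are worth checking first; a hedged sketch, with "myrepo" as a placeholder repository name:

        # 1. With SVNParentPath, /var/svn/repos is expected to contain repositories,
        #    and the checkout URL must name one of them, not just the bare host:
        svn checkout https://svn.mysite.com/myrepo myrepo

        # 2. The Apache user must be able to read and write the repository files:
        sudo chown -R www-data:www-data /var/svn/repos

    Seeing conf/ db/ hooks/ locks/ in the browser also hints that /var/svn/repos may itself be a single repository rather than a parent directory of repositories; in that case, either move the repository into a subdirectory or use SVNPath instead of SVNParentPath.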

    Read the article

  • Plesk command working in manual script, not in cronjob

    - by dsaunier
    Hi, in order to install a hosting plan, I use Plesk's commands over SSH as specified in their official guide. When typed directly in SSH (PuTTY), it works perfectly. The line is as follows, with the values obviously hard-coded when run in the CLI: /usr/local/psa/bin/domain --create '.$url.' -owner mynamehere -ip '.IP_SERVER_PLESK.' -status enabled -hosting true -hst_type phys -login '.$ftp_user.' -passwd '.$ftp_pw.' -www false -php true -php_safe_mode false -hard_quota 100M I then put that command in a PHP script that does other things after the hosting is installed. Now for the weird part: when calling that script from the CLI it also works fine; I do a ./myscript.php and it installs the hosting, then sends emails, etc. However, after I create a cronjob to have that same script called regularly, the Plesk command fails. The cronjob is set up in Plesk as */15 * * * * /usr/bin/php /home/scripts/myscript.php and it works fine for everything BUT the Plesk hosting install, which returns "Unable to read Control Panel configuration file" and therefore does not install the domain hosting. Still, this is the same script that I call manually! On that server, is the PHP binary used for cronjobs different from the one used in the CLI? What am I missing? Help greatly appreciated! Regards.
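
    A hedged sketch of how to answer the "different PHP?" question and to give the cron run the same user and environment as the interactive session; the PATH/HOME values and the root crontab are assumptions to verify on the box, not Plesk-documented requirements:

        # Compare the interpreter cron uses with the one your shell finds
        /usr/bin/php -v
        which php && php -v

        # Root crontab variant with an explicit environment, since the Plesk
        # utilities generally need to run as the same (root) user that works in SSH
        PATH=/usr/local/psa/bin:/usr/local/bin:/usr/bin:/bin
        HOME=/root
        */15 * * * * /usr/bin/php /home/scripts/myscript.php >> /var/log/myscript.log 2>&1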

    Read the article

  • How can I set up an nginx cache strategy that tries Amazon S3 first, then memcached, and falls back on a miss?

    - by Tim
    I have a large site with lots of pages that almost never change. Right now I am using two memcached servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that hardly ever change, I want to upload them to Amazon S3 and shut down one memcached server. Here is my conf: location ~ /longterm/(.*){ proxy_pass http://amazonS3bucket; proxy_intercept_errors on; proxy_next_upstream http_404; error_page 404 503 = @fallback_memcached } location @fallback_memcache { set $memcached_key $uri; memcached_pass name:11211; error_page 404 @fallback; } location @fallback { try_files $uri $uri/index.html } I don't know why, but the config doesn't work on the final fallback: if I get an Amazon S3 hit it works, and if I get an S3 miss and a memcached hit it works, but if I get an S3 miss and then a memcached miss, it fails when it tries to resolve the last fallback. I am also thinking of using the Amazon S3 FUSE filesystem http://code.google.com/p/s3fs/ instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?
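
    Two details in the config above stand out: error_page points at @fallback_memcached while the named location is declared as @fallback_memcache, and the final try_files has no terminal fallback (its last argument must be a URI or =code it can always resolve to). A hedged corrected sketch, keeping the names and paths from the question:

        location ~ ^/longterm/(.*) {
            proxy_pass             http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream    http_404;
            error_page 404 503 = @fallback_memcache;   # must match the named location below
        }

        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 = @fallback;
        }

        location @fallback {
            # try_files needs a last resort it can always serve (or an error code)
            try_files $uri $uri/index.html =404;
        }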

    Read the article

  • How to bind a label inside a gridview to another table?

    - by Kolten
    I have a very standard Gridview, with Edit and Delete buttons auto-generated. It is bound to a tableadapter which is linked to my "RelationshipTypes" table. dbo.RelationshipTypes: ID, Name, OriginConfigTypeID, DestinationConfigTypeID I wish to use a label that will pull the name from the ConfigTypes table, using the "OriginConfigTypeID" and "DestinationConfigTypeID" as the link. dbo.ConfigTypes: ID, Name My problem is, I can't automatically generate Edit and Delete buttons using an Inner Join in my dataset. Or can I? Following is my code: <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" AutoGenerateDeleteButton="True" AutoGenerateEditButton="True" CssClass="TableList" DataKeyNames="ID" DataSourceID="dsRelationShipTypes1"> <Columns> <asp:BoundField DataField="ID" HeaderText="ID" InsertVisible="False" ReadOnly="True" SortExpression="ID" Visible=False/> <asp:TemplateField HeaderText="Origin" SortExpression="OriginCIType_ID"> <EditItemTemplate> &nbsp;<asp:DropDownList Enabled=true ID="DropDownList2" runat="server" DataSourceID="dsCIType1" DataTextField="Name" DataValueField="ID" SelectedValue='<%# Bind("OriginCIType_ID") %>'> </asp:DropDownList> </EditItemTemplate> <ItemTemplate> &nbsp; <asp:Label ID="Label2" runat="server" Text='<%# Bind("OriginCIType_ID") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Name" SortExpression="Name"> <EditItemTemplate> <asp:TextBox ID="TextBox3" runat="server" Text='<%# Bind("Name") %>'></asp:TextBox> </EditItemTemplate> <ItemTemplate> <asp:Label ID="Label3" runat="server" Text='<%# Bind("Name") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Destination" SortExpression="DestinationCIType_ID"> <EditItemTemplate> <asp:DropDownList ID="DropDownList3" runat="server" DataSourceID="dsCIType1" DataTextField="Name" DataValueField="ID" SelectedValue='<%# Bind("DestinationCIType_ID") %>'> </asp:DropDownList> </EditItemTemplate> <ItemTemplate> <asp:Label ID="Label1" runat="server" Text='<%# Bind("DestinationCIType_ID") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> So I did try to create my own edit and delete buttons, but kept receiving the error "cannot find update method" or something similar. Do I have to manually code the delete and update methods in my code-behind?
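
    One approach that keeps the auto-generated Edit/Delete buttons working is to leave the grid bound to the plain RelationshipTypes table and resolve the display names only in the read query (the ItemTemplate labels then bind to the joined name columns, while editing still goes through the single table). A hedged SQL sketch of such a read-only query, using the table and column names from the question:

        SELECT rt.ID,
               rt.Name,
               rt.OriginConfigTypeID,
               oct.Name AS OriginConfigTypeName,
               rt.DestinationConfigTypeID,
               dct.Name AS DestinationConfigTypeName
        FROM   dbo.RelationshipTypes AS rt
               INNER JOIN dbo.ConfigTypes AS oct ON oct.ID = rt.OriginConfigTypeID
               INNER JOIN dbo.ConfigTypes AS dct ON dct.ID = rt.DestinationConfigTypeID;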

    Read the article

  • Using Plesk for webhosting on Ubuntu - Security risk or reasonably safe?

    - by user66952
    Sorry for this newb question; I'm pretty clueless about Plesk and only have limited Debian (without Plesk) experience. If the question is too dumb, telling me how to ask a smarter one, or what kind of info I should read first to improve it, would be appreciated as well. I want to offer a program for download on my website, hosted on an Ubuntu 8.04.4 VPS using Plesk 9.3.0 for web hosting. I have limited the SSH access to the server to key-only. 1. When setting up the web hosting with Plesk, it created an FTP login & user. Is that a potential security risk that could bypass the key-only access? 2. I think Plesk itself (even without the FTP user account) could be a risk through its web interface. Is that correct, or are my concerns exaggerated? Would you say this solution makes a difference if I'm just using it for the next two weeks and then change servers to a system where I know more about security? 3. In other words, is one less likely to get hacked within the first two weeks of having a new site up and running than in weeks 14 & 15 (due to appearing in fewer search results in the beginning perhaps, or for whatever reason...)?

    Read the article

  • Am I obliged to use IPv6 tunnel services if I want to be able to use IPv6?

    - by Zagorax
    I was looking into configuring Slackware to use IPv6, but all the instructions I found speak about using an IPv6 tunnel that encapsulates IPv6 requests into IPv4 packets and sends them to an external router, which extracts the IPv6 request and sends a reply (or, at least, this is what I understood). Is that necessary? Isn't there a way to configure a pure IPv6 system? If yes, could you please point me to a guide that clearly explains how to enable IPv6 without this trick? I would like to configure my Slackware desktop at first, and then do the same with my CentOS server. EDIT: maybe I gave you too little information. Sorry. I'll add some more, thanks to the posted guide. ~$ test -f /proc/net/if_inet6 && echo "Running kernel is IPv6 ready" Running kernel is IPv6 ready So, it seems IPv6 is enabled in my kernel. Some other output from ifconfig, route and the /etc/resolv.conf content (with OpenDNS): ~$ /sbin/ifconfig wlan0 | grep inet6 inet6 addr: fe80::21f:3bff:fe60:cc5b/64 Scope:Link ~$ /sbin/route -A inet6 | grep wlan0 fe80::/64 :: U 256 0 0 wlan0 ff00::/8 :: U 256 0 0 wlan0 ~$ cat /etc/resolv.conf inet6 nameserver 2620:0:ccc::2 nameserver 208.67.222.222 nameserver 208.67.220.220 But still, with ping6 I can only ping localhost (::1). Everything else is unreachable. Normal ping works fine. That is why I was asking whether I am obliged to use a tunnel.
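
    Note that the only IPv6 address shown above is a link-local one (fe80::...), which the kernel creates automatically and which cannot reach the Internet; native IPv6 also needs a global address and a default route, either learned from router advertisements on the LAN or configured by hand. A hedged sketch with documentation-prefix placeholders, assuming the ISP or local router actually offers native IPv6:

        # Manually configured global address and gateway (2001:db8::/32 is a placeholder prefix)
        ip -6 addr add 2001:db8:1234::10/64 dev wlan0
        ip -6 route add default via 2001:db8:1234::1 dev wlan0

        # Verify
        ip -6 addr show dev wlan0
        ping6 ipv6.google.com

    Without an upstream router (or tunnel endpoint) that actually speaks IPv6, a purely local configuration like this still won't reach IPv6 hosts on the Internet, which is why so many guides fall back to tunnel brokers.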

    Read the article

  • Can't access Apache from outside my local network

    - by valter
    UPDATED: Now, when I type my external IP like xxx.xxx.xxx.xxx:8079, I can access XAMPP's default page. But the strange thing is that when someone else from outside my network tries to access it using the same IP, it doesn't work. I think it should, because it's the external IP. It's driving me crazy. I have tried for hours to access XAMPP's default page from outside my local network. My ISP blocks ports 80 and 8080, so I changed Apache to listen on port 8079: Listen 8079 My local computer's IP is 10.1.1.2. I can access the web server from any computer on my local network when I type http://10.1.1.2:8079 I also opened port 8079 on my modem, as the image below shows. (I think I did it right.) When Apache is running on my computer, if I test port 8079 at http://canyouseeme.org/ I get the message "Success: I can see your service on xxx.xxx.xxx.xxx on port (8079) Your ISP is not blocking port 8079" If Apache is not running I get "Error: I could not see your service on xxx.xxx.xxx.xxx on port (8079) Reason: Connection refused". So, it's clear that port 8079 is open. But when I type xxx.xxx.xxx.xxx:8079 in Google Chrome, for example, I get Oops! Google Chrome could not connect to xxx.xxx.xxx.xxx:8079 What can I do to solve this and allow Apache to serve the pages? I don't know what else I should configure. Please help me. Thanks.
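
    A hedged pair of checks that separate "Apache/firewall on the PC" problems from "router port-forwarding" problems; the address and port come from the question, and the first command assumes the XAMPP box is Windows (use netstat -tln | grep 8079 on Linux):

        REM On the XAMPP machine: is Apache listening on 0.0.0.0:8079 (all interfaces), not just 127.0.0.1?
        netstat -an | findstr 8079

        REM From somewhere genuinely outside the network (a phone off Wi-Fi, a friend's PC):
        curl -I http://xxx.xxx.xxx.xxx:8079/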

    Read the article

  • How to remove the location prefix from $uri in an nginx configuration?

    - by Jason
    I have a rewrite in my nginx conf file that works properly, except that it seems to include the location prefix as part of the $uri variable. I only want the path after the location prefix. My current config code is: location /cargo { try_files $uri $uri/ /cargo/index.php?_REWRITE_COMMAND=$uri&args; } Using an example URL of http://localhost/cargo/testpage the redirect works; however, the value of the "_REWRITE_COMMAND" parameter received by my PHP file is "/cargo/testpage". I need to strip off the location prefix and just have "testpage" as the $uri. I am pretty sure there is a regex syntax to split the $uri and assign it to a new variable using $1, $2, etc., but I can't find any example of doing just a variable assignment using a regex that is not part of a rewrite statement. I've been looking and trying for hours and I just can't seem to get past this last step. I also know I could just strip this out in the application code, but the reason I want to fix it in the nginx conf is for compatibility reasons, as it also runs on Apache. I should also say that I have figured out a really hacky way to do it, but it involves an "if" statement to check for file existence, and the documentation specifically says not to do it that way. -- UPDATE: ANSWERED BY theuni: The regex goes in the location block definition. One note of caution is that the PHP handler location needs to be ABOVE this location; otherwise you will get a server error because it goes into an infinite redirect loop: location ~ ^/cargo/(.*) { try_files $1 /cargo/$1/ /cargo/index.php?_REWRITE_COMMAND=$1&args; }

    Read the article

  • NGINX + PHP-FPM - Strange issue when trying to display images via php-gd / readfile - Connection won't terminate

    - by anonymous-one
    Ok, to get the details out of the way, the PHP script can be as simple as: <? header('Content-Type: image/jpeg'); readfile('/local/image.jpg'); ?> When I execute this via nginx + PHP-FPM, the image shows up in the browser, but here is what happens: IE - The page stays blank for a long period of time, and eventually the image is shown. Chrome - The image shows, but the loading spinner spins and spins for a long period of time. Eventually the debugger will show the image in red as if it were an error, but the image shows up fine. Everything else on the server works great. It's pushing out a steady 100 Mbit serving static content. So this is definitely a PHP-FPM related issue. I THINK this may have something to do with the chunked encoding being sent back wrong? Also, I threw in a pause before the image was read, got the PID of the FPM process, and it looks as though it's terminating correctly (from strace): shutdown(3, 1 /* send */) = 0 recvfrom(3, "\1\5\0\1\0\0\0\0", 8, 0, NULL, NULL) = 8 recvfrom(3, "", 8, 0, NULL, NULL) = 0 close(3) = 0 The above was dumped long before IE/Chrome decided to give up loading the image (even though the image was shown). Displaying HTML/text content is fine. Big bodies etc. all load nice and fast and terminate right away (as they should). Doing something like: THIS IS THE IMAGE ---BINARY DUMP OF IMAGE--- works fine too. Any ideas?
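
    One cheap experiment, assuming the symptom is that the client never learns where the body ends: send an explicit Content-Length so the browser doesn't have to wait for the connection (or chunk stream) to close. A hedged sketch of the same script with that header added; the path is the placeholder from the question:

        <?php
        $file = '/local/image.jpg';
        header('Content-Type: image/jpeg');
        header('Content-Length: ' . filesize($file)); // tell the client exactly how many bytes follow
        readfile($file);
        ?>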

    Read the article

  • phpMyAdmin issue on cPanel

    - by user1149244
    I logged in to our cPanel to view phpMyAdmin, but I can't open it. It gives me this error message: Warning: session_write_close() [function.session-write-close]: write failed: No space left on device (28) in /usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php on line 42 Warning: session_write_close() [function.session-write-close]: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/cpanel/userhomes/cpanelphpmyadmin/sessions) in /usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php on line 42 Warning: Cannot modify header information - headers already sent by (output started at /usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php:42) in /usr/local/cpanel/base/3rdparty/phpMyAdmin/index.php on line 99 I contacted support to assist me with this, but they just told me that the server is unmanaged. They told me to free up the partition, but I don't know how to do it; I'm a noob on that matter. As they said, /var/log/btmp is over 8 GB. I want to delete that file. How would I do this? I need to free space on the partition, and I need some steps on how to do that, as well as how to delete /var/log/btmp. I have been googling around but haven't found anything.
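
    A hedged sketch of the usual first steps over SSH as root: confirm which filesystem is full, see what is using the space, and then empty the oversized log rather than deleting it outright (btmp records failed logins, and some tools expect the file to exist):

        # Which filesystem is full, and what is eating the space?
        df -h
        du -sm /var/log/* | sort -n | tail

        # Empty /var/log/btmp without removing the file itself
        > /var/log/btmp

        # Longer term, have logrotate rotate it (see the btmp entry in /etc/logrotate.conf)
        # so it cannot grow unbounded again.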

    Read the article

  • CentOS 5.5: installing PEAR DB

    - by John Gardeniers
    Disclaimer: I use Linux for some jobs but I am not a Linux admin. I have a Centos 5.4 machine which performs some server duties and doubles as a web site development machine. PHP 5.3.3 was installed from RPM with the --without-pear option. I now wish to use PearDB but can't figure out how to install it. If I run yum install php-pear-db, it comes back with Error: Missing Dependency: php = 5.1.6-27.el5_5.3 is needed by package php-devel-5.1.6-27.el5_5.3.i386 (updates). The only RPM I've found that looks like it might be close currently has a dead link, so I can't even try that. What would be the best way to go about this? Is there a way to reinstall from the RPM and include pear? Can I install the dependency without breaking the current installation? Should I try to uninstall the original PHP and reinstall it from source, complete with pear? I thought this might have been an SU question but the FAQ over there suggests otherwise.
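
    Since the PHP 5.3.3 build itself is fine and it is only the distro's php-pear-db package that drags in the old PHP 5.1.6 dependency, a hedged alternative is to install the DB package through PEAR's own installer instead of yum; a sketch, assuming the pear command is (or can be made) available for the existing PHP 5.3 install:

        # Install the PEAR installer against the existing PHP if it is missing
        wget http://pear.php.net/go-pear.phar
        php go-pear.phar

        # Then install the (legacy) DB package, or its successor MDB2, via PEAR rather than yum
        pear install DB
        # pear install MDB2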

    Read the article

  • Windows 7 system freezes: would like to know if they could be related to MrxSmb, Event ID 8003 errors

    - by lifegoeson
    First, this question centers around a home network. Is it okay to ask here, or should I go to Super User? (I see fewer answers over there, but I'll go there if that would be more appropriate.) Network setup: 1 machine running XP Pro, 1 machine running Win7 Ultimate, a Comcast router, and a Linksys WRT610N wireless router. The Win7 machine frequently goes into a total, unrecoverable system freeze. I was tearing my hair out trying to ascertain a cause, but I noticed that it usually seems to correspond with performing operations on the shared folders on the XP machine. On the last 2 occasions that the Win7 machine froze, I saw this entry for Event ID 8003 from source MrxSmb in the event log of the XP machine: The master browser has received a server announcement from the computer WIN7_COMPUTER that believes that it is the master browser for the domain on transport NetBT_Tcpip_{320B32A7-FED9. The master browser is stopping or an election is being forced. My question is twofold: Could this cause a Win7 system freeze? If so, what could I configure differently on my network to stop these conflicts over who is the master browser? Thank you for your help!
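
    For the second half of the question, a hedged sketch of the usual way to stop the browser elections: tell one of the two machines not to compete, either by disabling the Computer Browser service or via the registry. The service and value names below are the standard ones for XP/Win7, but treat them as suggestions and back up the registry before changing anything:

        REM Option A - disable the Computer Browser service on the Win7 box (elevated prompt)
        sc config Browser start= disabled
        net stop Browser

        REM Option B - registry, on the machine that should NOT be master browser:
        REM   Key:   HKLM\SYSTEM\CurrentControlSet\Services\Browser\Parameters
        REM   Value: MaintainServerList = No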

    Read the article

  • How many iptables block rules is too many

    - by mhost
    We have a server with a quad-core AMD Opteron 2378 processor. It acts as our firewall for several servers. I've been asked to block all IPs from China. In a separate network, we have some small VPS machines (256MB and 512MB), and I've been asked to block China on those VPSes as well. I've looked online and found lists which require 4500 block rules. My question is: will putting in all 4500 rules be a problem? I know iptables can handle far more rules than that; what I am concerned about is that, since these are blocks that shouldn't have access to any port, I need to put the rules before any allow rules. This means all legitimate traffic needs to be compared against all those rules before getting through. Will the traffic be noticeably slower after implementing this? Will those small VPSes be able to handle processing that many rules for every new packet (I'll put an established allow before the blocks)? My question is not "How many rules can iptables support?"; it's about the effect that these rules will have on load and speed. Thanks.
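
    For what it's worth, a common way to keep the per-packet cost low with thousands of networks is to load them into an ipset (a hash lookup) and reference the whole set from a single iptables rule; a hedged sketch using the newer ipset 6.x syntax, with china-cidr.txt standing in for the real block list:

        # Build a hash-based set from the CIDR list
        ipset create china hash:net
        while read net; do ipset add china "$net"; done < china-cidr.txt

        # One DROP rule after the established/related allow, instead of 4500 separate rules
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -m set --match-set china src -j DROP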

    Read the article

  • mysqld - master-to-slave replication using rsync: InnoDB log sequence number issues

    - by Luis
    I've read several of the related topics posted here, but I have not been able to avoid this InnoDB error. The steps I've taken to replicate data from a Slackware master - 5.5.27-log (S) to a FreeBSD slave - 5.5.21-log (F) were these: (S) flush tables with read lock; (S) in another terminal, show master status; (S) stop mysqld via the command line in a third terminal; (F) while both servers are stopped, rsync the MySQL datadir from (S), excluding master.info, mysql-bin and relay-* files; (F) start mysqld (skip-slave) 121018 12:03:29 InnoDB: Error: page 7 log sequence number 456388912904 InnoDB: is in the future! Current system log sequence number 453905468629. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: for more information. This kind of error happens for a lot of tables. I know I can use a dump, but the database is large (ca. 70GB) and the systems are slow (old), so I would like to get this replication to work with a file-level transfer. What should I try to solve this issue?
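
    The "log sequence number is in the future" message usually means the copied tablespace files and the InnoDB redo logs on the slave don't belong together, for example because ib_logfile* was excluded from the rsync or the source wasn't fully shut down when the copy was taken. A hedged sketch of a copy procedure that keeps them consistent; the paths are common defaults (Slackware init script, FreeBSD datadir) and may differ on your systems:

        # On the master: make InnoDB flush everything on shutdown, then stop cleanly
        mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
        /etc/rc.d/rc.mysqld stop

        # Copy the whole datadir, INCLUDING ibdata* and ib_logfile*,
        # still excluding the master's own binlogs, master.info and relay logs
        rsync -av --exclude 'master.info' --exclude 'mysql-bin.*' --exclude 'relay-*' \
              /var/lib/mysql/  slave:/var/db/mysql/

        # On the slave: fix ownership before starting with skip-slave-start
        chown -R mysql:mysql /var/db/mysql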

    Read the article

  • Linux: Three default gateways?

    - by Daniel
    My server has three default gateways; how can that be? Shouldn't there be just one default gateway? I have three NICs, each attached to a separate subnet: server1:~# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.5.0.0 * 255.255.255.224 U 0 0 0 eth3 localnet * 255.255.255.224 U 0 0 0 eth0 192.168.8.0 * 255.255.255.192 U 0 0 0 eth1 default 10.5.0.1 0.0.0.0 UG 0 0 0 eth3 default 192.168.8.1 0.0.0.0 UG 0 0 0 eth1 default 10.1.0.1 0.0.0.0 UG 0 0 0 eth0 Sometimes I can't ping a host on the Internet, and sometimes I can. What I want is traffic to the Internet (0.0.0.0) routed through a specific NIC. Can I just add a route for 0.0.0.0 and a default gateway on one of the eth0-3 interfaces? Will it break my connection? I'm using Debian; here is my /etc/network/interfaces: # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface allow-hotplug eth0 iface eth0 inet static address 10.1.0.4 netmask 255.255.255.224 network 10.1.0.0 broadcast 10.1.0.31 gateway 10.1.0.1 allow-hotplug eth1 iface eth1 inet static address 192.168.8.4 netmask 255.255.255.192 network 192.168.8.0 broadcast 192.168.8.63 gateway 192.168.8.1 allow-hotplug eth3 iface eth3 inet static address 10.5.0.4 netmask 255.255.255.224 network 10.5.0.0 broadcast 10.5.0.31 gateway 10.5.0.1
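
    The three defaults come straight from the interfaces file above: each of the three stanzas has its own gateway line, and each one adds a default route. A hedged sketch of the usual fix, keeping a gateway only on the interface that should carry Internet traffic (eth0 here is just an example choice):

        # /etc/network/interfaces - keep a "gateway" line in only ONE stanza
        iface eth0 inet static
            address 10.1.0.4
            netmask 255.255.255.224
            gateway 10.1.0.1          # Internet-bound traffic leaves via eth0

        iface eth1 inet static
            address 192.168.8.4
            netmask 255.255.255.192
            # no gateway line here

        iface eth3 inet static
            address 10.5.0.4
            netmask 255.255.255.224
            # no gateway line here

    To clean up the running system without a reboot, the extra defaults can then be dropped with route del default gw 192.168.8.1 and route del default gw 10.5.0.1 (or by restarting networking).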

    Read the article

  • Can my employer force me to back up my personal machine? [closed]

    - by Eric B
    Here's the background: approximately 1.25 years ago, the company I work for was acquired by a larger 400-person company. Before the acquisition (and still today) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization. Some time after the acquisition, the now-owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) requires them to retrieve and store any related information from us. Because we were a separate company up until the acquisition, there is a high probability that our personal machines contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since the acquisition (1.25 years), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc. for work. Email is via POP3S and is not hanging around on the mail server - it's on everyone's client. Documents are spread across personal machines. So, now they want us each to back up our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents; that single folder will be excluded from the backup. Of course, that means a total re-arrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article
