Search Results

Search found 111890 results on 4476 pages for 'git update server info'.

  • samba "username map" stopped working after upgrade to 3.6

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old installation of DRBD, etc.). Going as usual with CentOS, I upgraded the whole system from 6.3 to 6.4. The latter came with samba 3.6, whereas the old one had 3.5. I transferred most of the users by copying /etc/passwd, /etc/shadow and the samba accounts with pdbedit. Homes were on an nfs drive. The translation of unix accounts to samba accounts is located in /etc/samba/smbusers. Strangely enough, on some Windows clients there was a problem connecting to the samba shares. In one case the only thing that worked was to use the unix account name instead of the Windows name. In another, it was possible to mount the network drive and open it in Windows Explorer, but other applications like Total Commander would fail to open the drive with the message "Cannot connect to z:" (sometimes user/pass were requested at this point). The smb.conf has the following entries:

      [global]
         security = user
         passdb backend = tdbsam
         username map = /etc/samba/smbusers
         ...
      [Kris]
         comment = Kris's Private
         path = /SMB/Users/Kris
         writeable = yes
         read only = no
         browseable = yes
         users = krisr
         printable = no
         security mask = 0777
         force security mode = 0
         directory security mask = 0777
         force directory security mode = 0
         force create mode = 0775
         force directory mode = 6775

    The smbusers file:

      # Unix_name = SMB_name1 SMB_name2 ...
      krisr = Kris

    Of course testparm runs without any errors. With samba 3.5 I was used to seeing log output of the form "Mapped user kris to krisr"; nothing like that happens now, just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some people had problems with 3.6 and security = ADS, but those threads were not helpful for me. I'm seriously thinking about downgrading back to samba 3.5, but before taking that step I wanted to ask whether somebody knows a solution to these problems.
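    A quick way to narrow this down is to raise the client debug level and watch whether the mapping message appears at all; a hedged sketch (share name and user are taken from the question, the log path assumes a CentOS default):

      # confirm the map file is actually picked up by the running config
      testparm -s 2>/dev/null | grep -i "username map"
      # connect with the Windows-style name and watch the debug output for a "Mapped user" line
      smbclient //localhost/Kris -U Kris -d 3
      # or follow the server log while a Windows client connects
      tail -f /var/log/samba/log.smbd

    If the mapping line never shows up, that at least confirms the map is not being applied for these connections, which matches the check_sam_security error above.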

  • Very slow connection to xserve via afp or smb

    - by Mhoffman13
    Help. File transfer and connection speed to our Xserve are painfully slow from newly purchased iMacs. The Xserve is only used as a file server; it's running 10.4.11. The problem only seems to happen on brand new iMacs running 10.6.3. When connected over either AFP or SMB, copying files is many times slower than usual. Other machines on the network running either 10.4 or 10.5 have normal connection speed. To try to rule out OS incompatibility I connected the new iMac running 10.6 to another computer running 10.4 over the network; the file transfer speed was as fast as normal. So it seems the problem lies with the Xserve (maybe). The AFP logs (either access or error) don't show anything unusual. One thing that did look different: when the iMac was connected to the Xserve, the user had its id listed as its IP address, while the other connected machines had the id of broadcasthost. I also noticed that when connected from the new iMac I can only see one of the mirrors; when any other computer connects, both mirrors are shown. Tried a restart of the Xserve but the problem persists. Thanks in advance for any advice.

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time and, as soon as the response time from Tomcat gets above some threshold, display an error page. That way the users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I was looking for suggestions on how I could achieve this. Thanks.
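    Stock mod_proxy has no built-in average-latency trigger, but a rough approximation is to cap how long Apache waits on a backend and serve a static error page once requests start failing. A hedged sketch (the config path, balancer name, member URLs and timeout values are all assumptions to tune):

      cat > /etc/apache2/conf.d/balancer-failfast.conf <<'EOF'
      # give up on a backend request after 10s instead of letting clients queue
      ProxyTimeout 10
      <Proxy balancer://tomcats>
          BalancerMember http://tomcat1:8080 timeout=10 retry=60
          BalancerMember http://tomcat2:8080 timeout=10 retry=60
      </Proxy>
      # what users see when members time out or the whole pool is marked failed
      ErrorDocument 502 /overloaded.html
      ErrorDocument 503 /overloaded.html
      ErrorDocument 504 /overloaded.html
      EOF
      apachectl configtest && apachectl graceful

    This is not true latency monitoring (something like mod_qos or an external health check would be needed for that), but it does turn "everyone times out slowly" into "the unlucky get a fast error page".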

  • how to initialize two logical drives on an HP P400i controller without a reboot

    - by John
    What I am trying to do is initialize two logical drives on an HP P400i embedded controller without a reboot of the system. Here is my current array config:

      array A (SAS, Unused Space: 0 MB)
        logicaldrive 1 (17.9 GB, RAID 5, OK)
        logicaldrive 2 (17.9 GB, RAID 5, OK)
        logicaldrive 3 (75.9 GB, RAID 5, OK)
        logicaldrive 4 (25.0 GB, RAID 5, OK)
        physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)
        physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)
        physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 72 GB, OK)

      array B (SAS, Unused Space: 0 MB)
        logicaldrive 5 (99 MB, RAID 0, OK)
        logicaldrive 6 (68.2 GB, RAID 0, OK)
        physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 72 GB, OK)

    This is a Windows 2003 machine running the HpCISs2.sys driver version 6.20.0.32. I have the ACU and ACU CLI tools installed (version 8.28.13.0), P400i firmware version 2.74. What I'd like to do is remove physical drive 1I:1:4, delete the two logical drives in array B, then insert a new drive into bay 4 that contains two new logical drives and have them show up in array B again. So far, after I remove the drive and delete the failed logical drives, I insert the new drive and run "hpacucli rescan". The new drive shows up as an unassigned physical drive, but I can't figure out how to, for lack of a better word, mount the two logical drives that live on the new unassigned disk. If I reboot the system, the array controller picks up the new fourth drive and creates array B with its drives without a problem, but I'd really like to not have to reboot the server. Any ideas?
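    For reference, the state the controller sees after the swap can be inspected without rebooting; a minimal sketch (slot=0 is an assumption for the embedded P400i, adjust to whatever "hpacucli ctrl all show" reports):

      hpacucli rescan
      hpacucli ctrl all show
      hpacucli ctrl slot=0 show config detail
      hpacucli ctrl slot=0 pd 1I:1:4 show

    Whether the controller will re-import a foreign array from the inserted disk without a POST-time scan is a separate question; the commands above only show what it currently recognizes.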

  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

      domains/
      domains/domain.ext/                                    # FTPS chroot for user domain.ext
      domains/domain.ext/public                              # the DocumentRoot of http://domain.ext
      domains/domain.ext/logs
      domains/domain.ext/subdomains/sub.domain.ext
      domains/domain.ext/subdomains/sub.domain.ext/public    # DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its own dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this:

      ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
      CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is really handy during development stages. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about three security issues:

      - Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing?
      - Log DoS: the logs are on the same partition as all the domains. It has hundreds of gigs, but still, if one site gets disk-space DoS'd, everything will break. Any workaround? Will a short-timed logrotate suffice?
      - File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle 2 log files per subdomain. Is it? Am I missing something?

    I hope to read some thoughts on the matter!
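    On the logrotate point: because these files are written by long-lived "tee -a" processes that keep the file handle open, a scheme that renames the file would leave tee writing to the rotated copy, so copytruncate is the safer mode here. A hedged sketch (the glob assumes the domains/ tree lives under /srv; adjust to the real path):

      cat > /etc/logrotate.d/vhost-logs <<'EOF'
      /srv/domains/*/logs/*.log {
          daily
          rotate 14
          compress
          delaycompress
          missingok
          notifempty
          copytruncate
      }
      EOF
      logrotate -d /etc/logrotate.d/vhost-logs   # dry run, prints what would happen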

  • How seriously should I take ECC correctable error warnings?

    - by David Mackintosh
    I have a pile of Sun X2200-M2 servers. These servers have ECC memory. In some of these servers, I am getting warnings in the eLOM about "correctable ECC errors detected", e.g.:

      # ssh regress11 ipmitool sel elist
      1 | 05/20/2010 | 14:20:27 | Memory CPU0 DIMM2 | Correctable ECC | Asserted
      2 | 05/20/2010 | 14:33:47 | Memory CPU0 DIMM2 | Correctable ECC | Asserted

    ...some more frequently than others. The kernel on this particular system is throwing EDAC errors as well, although with far more frequency than the eLOM is recording ECC events:

      EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic)
      MC0: CE page 0x42a194, offset 0x60, grain 8, syndrome 0xf654, row 4, channel 1, label "": k8_edac
      MC0: CE - no information available: k8_edac Error Overflow set
      EDAC k8 MC0: extended error code: ECC chipkill x4 error
      EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic)
      MC0: CE page 0x48cb94, offset 0x10, grain 8, syndrome 0xf654, row 5, channel 1, label "": k8_edac
      MC0: CE - no information available: k8_edac Error Overflow set
      EDAC k8 MC0: extended error code: ECC chipkill x4 error

    Now, if the server detects an uncorrectable ECC error, the system resets, so clearly that's bad, and removing/replacing the identified stick or pair corrects the issue. But I am thinking that if the error is correctable, then there's no immediate issue -- I can treat this as a warning and be prepared to pull the stick/pair if an uncorrectable error starts occurring?
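    If the plan is to watch-and-wait, it helps to track the per-DIMM correctable counts over time rather than eyeballing dmesg; a small sketch (the sysfs layout below is the older csrow-based one that matches this k8_edac kernel; paths may differ on newer kernels):

      # how many correctable events the BMC has logged so far
      ipmitool sel elist | grep -c 'Correctable ECC'
      # running correctable-error counters kept by the kernel EDAC driver
      grep . /sys/devices/system/edac/mc/mc*/ce_count
      grep . /sys/devices/system/edac/mc/mc*/csrow*/ce_count

    A stick that logs a handful of correctable errors a month is a different conversation from one whose counter climbs every few minutes.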

  • Is it worthwhile to block malicious crawlers via iptables?

    - by EarthMind
    I periodically check my server logs and I notice a lot of crawlers searching for the location of phpMyAdmin, Zen Cart, RoundCube, administrator sections and other sensitive data. Then there are also crawlers under the name "Morfeus Fucking Scanner" or "Morfeus Strikes Again" searching for vulnerabilities in my PHP scripts, and crawlers that perform strange (XSS?) GET requests such as:

      GET /static/)self.html(selector?jQuery(
      GET /static/]||!jQuery.support.htmlSerialize&&[1,
      GET /static/);display=elem.css(
      GET /static/.*.
      GET /static/);jQuery.removeData(elem,

    Until now I've always been adding these IPs to iptables manually to block them. But as these requests are only performed a few times from the same IP, I'm having doubts about whether blocking them provides any real security advantage. I'd like to know if it does any good to block these crawlers in the firewall, and if so, whether there's a (not too complex) way of doing this automatically. And if it's wasted effort, maybe because these requests come from new IPs after a while, I'd appreciate it if anyone could elaborate on this and maybe suggest more efficient ways of denying/restricting malicious crawler access. FYI: I'm also already blocking w00tw00t.at.ISC.SANS.DFind:) crawls using these instructions: http://spamcleaner.org/en/misc/w00tw00t.html
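    For the "automatically" part, fail2ban is the usual low-effort answer: it tails the access log and inserts temporary iptables rules itself. A hedged sketch (the filter regex, log path and ban time are assumptions to adapt, not a drop-in config):

      cat > /etc/fail2ban/filter.d/apache-probe.conf <<'EOF'
      [Definition]
      failregex = ^<HOST> .*"(GET|POST) /(phpmyadmin|pma|mysql|administrator|w00tw00t)[^"]*"
      ignoreregex =
      EOF
      cat >> /etc/fail2ban/jail.local <<'EOF'
      [apache-probe]
      enabled  = true
      filter   = apache-probe
      action   = iptables-multiport[name=probe, port="http,https"]
      logpath  = /var/log/apache2/*access.log
      maxretry = 2
      bantime  = 86400
      EOF
      fail2ban-client reload

    Since the scanners rotate IPs, short bans mostly buy quieter logs rather than real protection; keeping the probed applications patched (or absent) matters more.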

  • Application to automate Windows software installation in a test lab

    - by Marc
    I have several test environments (Hyper-V) which contain a variety of Windows servers. Each machine needs to be periodically rolled back to a given snapshot and then re-installed with the latest version of our software to test. The software installs are quite complex MSIs with a fair few option screens. I know that the installs can be driven from the command line, passing in parameters to override the wizard options. At the simplest level I suppose I could just write a batch file to kick off each install with the required parameters; however, the values that are passed in do need to change from time to time (and environment to environment), so a tool with a config file and a simple GUI seems like a better idea. I think what makes it slightly more painful is the multiple environments. For example, one environment might contain 4 servers and need a config file with all the server names, service endpoints, etc. Another environment might be a 1-box install with all names and endpoints set to localhost. So, ideally I want to be able to store different setup configurations and use them to run all the required installers with the relevant settings against the relevant machines. Before I go off to write the thing, does anyone know of an existing, simple, free tool that will let me achieve this?

  • Enable PostBack for an ASP.NET User Control

    - by Steven
    When I click my "Query" button, the values of my user controls are reset. How do I enable PostBack for my user control?

    myDatePicker.ascx:

      <%@ Control Language="vb" CodeBehind="myDatePicker.ascx.vb" Inherits="Website.myDate" AutoEventWireup="false" %>
      <%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="asp" %>
      <asp:TextBox ID="DateTxt" runat="server" ReadOnly="True" />
      <asp:Image ID="DateImg" runat="server" ImageUrl="~/Calendar_scheduleHS.png" EnableViewState="True" EnableTheming="True" />
      <asp:CalendarExtender ID="DateTxt_CalendarExtender" runat="server" Enabled="True" TargetControlID="DateTxt"
          PopupButtonID="DateImg" DefaultView="Days" Format="ddd MMM dd, yyyy" EnableViewState="True"/>

    myDatePicker.ascx.vb:

      Partial Public Class myDate
          Inherits System.Web.UI.UserControl

          Public Property SelectedDate() As Date?
              Get
                  Dim o As Object = ViewState("SelectedDate")
                  If o = Nothing Then
                      Return Nothing
                  End If
                  Return Date.Parse(o)
              End Get
              Set(ByVal value As Date?)
                  ViewState("SelectedDate") = value
              End Set
          End Property
      End Class

    Default.aspx:

      <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Default.aspx.vb" Inherits="Website._Default" EnableEventValidation="false" EnableViewState="true" %>
      <%@ Register TagPrefix="my" TagName="DatePicker" Src="~/myDatePicker.ascx" %>
      <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajax" %>
      <%@ Register Assembly="..." Namespace="System.Web.UI.WebControls" TagPrefix="asp" %>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      ...
      <body>
          <form id="form1" runat="server">
              <div class="mainbox">
                  <div class="query">
                      Start Date<br />
                      <my:DatePicker ID="StartDate" runat="server" EnableViewState="True" />
                      End Date <br />
                      <my:DatePicker ID="EndDate" runat="server" EnableViewState="True" />
                      ...
                      <div class="query_buttons">
                          <asp:Button ID="Button1" runat="server" Text="Query" />
                      </div>
                  </div>
                  <asp:GridView ID="GridView1" ... >
          </form>
      </body>
      </html>

    Default.aspx.vb:

      Imports System.Web.Services
      Imports System.Web.Script.Services
      Imports AjaxControlToolkit

      Protected Sub Page_Load(ByVal sender As Object, _
              ByVal e As EventArgs) Handles Me.Load
      End Sub

      Partial Public Class _Default
          Inherits System.Web.UI.Page

          Protected Sub Button1_Click(ByVal sender As Object, _
                  ByVal e As EventArgs) Handles Button1.Click
              GridView1.DataBind()
          End Sub
      End Class

  • What is stopping postfix from delivering mail to the local transport agent?

    - by Dark Star1
    I have the following settings (as grabbed from my postconf -n output):

      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      broken_sasl_auth_clients = yes
      config_directory = /etc/postfix
      content_filter = smtp-amavis:[127.0.0.1]:10024
      inet_interfaces = all
      mailbox_command = procmail -a "$EXTENSION"
      mailbox_size_limit = 0
      maximal_backoff_time = 8000s
      maximal_queue_lifetime = 7d
      minimal_backoff_time = 1000s
      mydestination = $mydomain, localhost.$mydomain, localhost
      myhostname = //redacted
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      myorigin = /etc/mailname
      readme_directory = no
      recipient_delimiter = +
      relayhost =
      smtp_helo_timeout = 60s
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      smtpd_hard_error_limit = 12
      smtpd_recipient_limit = 10
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_authenticated_header = yes
      smtpd_sasl_local_domain =
      smtpd_sasl_path = private/auth
      smtpd_sasl_security_options = noanonymous
      smtpd_sasl_type = dovecot
      smtpd_soft_error_limit = 3
      smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes
      unknown_local_recipient_reject_code = 450
      virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf, mysql:/etc/postfix/mysql_virtual_alias_domainaliases_maps.cf
      virtual_gid_maps = static:8
      virtual_mailbox_base = /var/vmail
      virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_domains_maps.cf
      virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf, mysql:/etc/postfix/mysql_virtual_mailbox_domainaliases_maps.cf
      virtual_transport = virtual
      virtual_uid_maps = static:5000

    postconf also prints these warnings:

      postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_overquota_bounce=yes
      postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_limit_maps=mysql:/etc/postfix/mysql_virtual_mailbox_limit_maps.cf
      postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_maildir_limit_message=Sorry, the your maildir has overdrawn your diskspace quota, please free up some of spaces of your mailbox try again.
      postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_create_maildirsize=yes
      postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_extended=yes
      postconf: warning: /etc/postfix/main.cf: unused parameter: virtual_mailbox_limit_override=yes
      postconf: warning: /etc/postfix/main.cf: unused parameter: smtpd_relay_restrictions=reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unauth_destination, check_policy_service inet:127.0.0.1:10023, permit

    I am new to mail server configurations, but as I understand it from this message:

      status=deferred (mail transport unavailable)

    it means postfix can't deliver to the LDA. I am using postfix 2.9.6 on Ubuntu 12.04 with dovecot 2.0.19.
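    A few commands that usually pinpoint which transport is "unavailable" (a diagnostic sketch; the queue ID is a placeholder and the log path is the Ubuntu default):

      postqueue -p | head                       # list deferred mail and its queue IDs
      postcat -q QUEUEID | grep -i recipient    # QUEUEID: substitute a real ID from the line above
      postconf -n | grep -E 'virtual_transport|mydestination|virtual_mailbox_domains'
      tail -n 100 /var/log/mail.log | grep -Ei 'virtual|error|warning'

    The usual suspects with a config like this are a domain listed in both mydestination and the virtual domain maps, or the virtual delivery agent failing on /var/vmail ownership (uid 5000 / gid 8 here); the mail.log lines around the deferral normally say which.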

  • Chrome caching 302 redirects

    - by Thermionix
    I have a PHP script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes make another request and a different image is returned. Under Chrome, the request seems to be cached and only opening the page in a new tab causes it to actually query the script. I believe this used to work in older versions of Chrome. I've tried a few different types of redirect codes, all with the same result. Any tips?

      <img class="banner" src="/inc/banner.php" alt="">

      ~$ cat /var/www/inc/banner.php
      <?php
      header("HTTP/1.1 302 Redirect");
      header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
      //header('HTTP/1.1 307 Temporary Redirect');
      //header("expires: none");
      //header("expires: max");
      //header("Cache-Control: public");

      $folder = '../img/banner/';
      $exts = 'jpg jpeg png gif';
      $files = array();
      $i = -1;
      if ('' == $folder) $folder = './';

      $handle = opendir($folder);
      $exts = explode(' ', $exts);
      while (false !== ($file = readdir($handle))) {
          foreach ($exts as $ext) {   // for each extension check the extension
              if (preg_match('/\.'.$ext.'$/i', $file, $test)) {   // faster than ereg, case insensitive
                  $files[] = $file;   // it's good
                  ++$i;
              }
          }
      }
      closedir($handle);   // We're not using it anymore

      mt_srand((double)microtime()*1000000);   // seed for PHP < 4.2
      $rand = mt_rand(0, $i);                  // $i was incremented as we went along
      header('Location: '.$folder.$files[$rand]);
      flush();
      ?>

    curl output:

      ~$ curl -I -k https://example.net/inc/banner.php
      HTTP/1.1 302 Redirect
      Server: nginx/1.1.14
      Date: Fri, 24 Feb 2012 03:23:46 GMT
      Content-Type: text/html
      Connection: keep-alive
      X-Powered-By: PHP/5.3.10-1ubuntu1
      Cache-Control: max-age=0, no-cache, no-store, must-revalidate
      Location: ../img/banner/2.jpg

  • Opening an existing process

    - by Grasper
    I am using Eclipse on Linux through a remote connection (xrdp). My internet connection dropped, so I got disconnected from the server while Eclipse was running. Now I have logged in again, and when I run the "top" command I can see that Eclipse is still running under my user name. Is there some way I can bring that process back into my view? (I do not want to kill it because I am in the middle of checking in a large swath of code.) It doesn't show up on the bottom panel after I logged in again. Here is the "top" output:

      /home/mclouti% top
      top - 08:32:31 up 43 days, 13:06, 29 users, load average: 0.56, 0.79, 0.82
      Tasks: 447 total, 1 running, 446 sleeping, 0 stopped, 0 zombie
      Cpu(s): 6.0%us, 0.7%sy, 0.0%ni, 92.1%id, 1.1%wa, 0.1%hi, 0.1%si, 0.0%st
      Mem:  3107364k total, 2975852k used, 131512k free, 35756k buffers
      Swap: 2031608k total, 59860k used, 1971748k free, 817816k cached

        PID  USER      PR  NI  VIRT  RES   SHR  S %CPU %MEM   TIME+    COMMAND
        13415 mclouti   15  0   964m  333m  31m  S 21.2 11.0   83:12.96 eclipse
        16040 mclouti   15  0   2608  1348  888  R  0.7  0.0    0:00.12 top
        31395 mclouti   15  0   29072 20m   8524 S  0.7  0.7  611:08.08 Xvnc
        2583  root      20  0   898m  2652  1056 S  0.3  0.1  139:26.82 automount
        28990 postgres  15  0   13564 868   304  S  0.3  0.0   26:33.36 postgres
        28995 postgres  16  0   13808 1248  300  S  0.3  0.0    6:54.95 postgres
        31440 mclouti   15  0   3072  1592  1036 S  0.3  0.1    6:01.54 gam_server
        1     root      15  0   2072  524   496  S  0.0  0.0    0:03.00 init
        2     root      RT  -5  0     0     0    S  0.0  0.0    0:04.53 migration/0
        3     root      34  19  0     0     0    S  0.0  0.0    0:00.04 ksoftirqd/0
        4     root      RT  -5  0     0     0    S  0.0  0.0    0:00.00 watchdog/0
        5     root      RT  -5  0     0     0    S  0.0  0.0    0:01.72 migration/1
        6     root      34  19  0     0     0    S  0.0  0.0    0:00.07 ksoftirqd/1
        7     root      RT  -5  0     0     0    S  0.0  0.0    0:00.00 watchdog/1
        8     root      RT  -5  0     0     0    S  0.0  0.0    0:04.33 migration/2
        9     root      34  19  0     0     0    S  0.0  0.0    0:00.05 ksoftirqd/2
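    Since the Eclipse process is still alive, the question becomes which X display it is attached to (the Xvnc process in the list suggests the xrdp session is still running as well). A hedged check, using the PID from the top output above:

      # show the DISPLAY the surviving Eclipse was started on
      tr '\0' '\n' < /proc/13415/environ | grep '^DISPLAY'

    If that display belongs to the still-running Xvnc session, reconnecting to the same xrdp session (rather than starting a fresh one) should bring the window back untouched.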

  • dd with oflag=direct is 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs:

      2x CPU, 16-core AMD Opteron 6282 SE
      64GB RAM
      RAID controller H700, 1GB NV cache
        - 2 HD 74GB SAS 15Krpm, RAID1, stripe 16k (OS, CentOS 6.2) -> sda
        - 4 HD 146GB SAS 15Krpm, RAID10, stripe 16k (ext4, bs 4096, no barriers) -> sdb -> /vol01
      RAID controller H800, 1GB NV cache
        - MD1200 with 12 HD 300GB SAS 15Krpm, RAID10, stripe 256k (for Postgres 8.3.18 DB) (ext4, bs 4096, stride 64, stripe-width 384, no barriers) -> sdc -> /vol02

    I'm benchmarking IO speed with dd, and I see that on the 12-disk RAID10, running:

      dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s

    but if I remove the "oflag=direct" option I obtain about 80 MB/s. In the read benchmark, results are similar:

      dd of=/dev/null if=DD bs=8M count=10000 iflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s

    If I remove iflag=direct I obtain 150 MB/s. I don't understand these huge differences; on other machines I don't see this behavior. Could I have some kernel parameter misconfigured? Thanks!
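    The gap is often as much about what is being measured as about a broken setting: without oflag=direct the 84 GB (larger than RAM) goes through the page cache and the timing includes whatever writeback pattern the VM layer chooses, while O_DIRECT goes straight to the controller's NV cache. Two hedged variants that make the buffered numbers comparable (the file path on /vol02 is an assumption):

      # include the final flush in the buffered write measurement
      dd if=/dev/zero of=/vol02/DD bs=8M count=10000 conv=fdatasync
      # drop the page cache first so the buffered read is not served from RAM
      sync; echo 3 > /proc/sys/vm/drop_caches
      dd of=/dev/null if=/vol02/DD bs=8M count=10000

    If the buffered numbers stay far below the O_DIRECT ones even then, comparing vm.dirty_ratio / vm.dirty_background_ratio and the readahead setting (blockdev --getra /dev/sdc) against the "other machines" would be the next step.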

  • How to configure multiple virtual hosts for multiple users on Linux/Apache2.2

    - by authentictech
    I want to set up a virtual hosting server on Linux/Apache 2.2 that allows multiple users to set up multiple website domains, as would be appropriate for commercial shared hosting. I have seen examples (from my then perspective as a shared hosting customer) that allow users to store their web files in their home directory, with directories corresponding to the virtual host domains, e.g.:

      /home/user1/www/example1.com
      /home/user2/www/example2.com

    instead of using /var/www. Questions:

      1. How would you configure this in your Apache configuration files? (Don't worry about DNS.)
      2. Is this the best way to manage multiple virtual hosts? Are there others?
      3. What safety or security issues do you think I should be aware of in doing this?

    Many thanks, folks. Edit: If you want to answer only question 1, please feel free, as that is the most urgent to me at this moment and I would consider that an answer to the question. I have done it for myself since posting, but I am not confident that it's the best solution and I would like to know how an experienced sysadmin would do it. Thanks.
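    For question 1, a minimal per-user vhost on Apache 2.2 can be sketched like this (file locations follow a RHEL-style layout and the domain/user names are the examples above; both are assumptions to adapt):

      cat > /etc/httpd/conf.d/example1.com.conf <<'EOF'
      # NameVirtualHost *:80 must appear only once in the whole configuration
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName example1.com
          ServerAlias www.example1.com
          DocumentRoot /home/user1/www/example1.com
          <Directory /home/user1/www/example1.com>
              Options -Indexes +FollowSymLinks
              AllowOverride All
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog /var/log/httpd/example1.com-error.log
          CustomLog /var/log/httpd/example1.com-access.log combined
      </VirtualHost>
      EOF
      # the apache user needs search permission down the path to the DocumentRoot
      chmod 711 /home/user1
      apachectl configtest && apachectl graceful

    The usual follow-ups for the security question are running per-user PHP/CGI (suexec, mpm-itk or FastCGI wrappers) and keeping home permissions tight so users cannot read each other's vhosts.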

  • memcache with fast-cgi, php and apache 2.2 on windows 7 creating problems

    - by Ahmad
    Hi, I am trying to run memcache and FastCGI with Apache 2.2 + PHP on a Windows 7 machine. If I don't use memcache, everything works fine: the moment I disable extension=php_memcache.dll in php.ini, everything returns to normal. Once I start Apache, the Apache logs say:

      [Wed Jan 12 18:19:23 2011] [notice] Apache/2.2.17 (Win32) mod_fcgid/2.3.6 configured -- resuming normal operations
      [Wed Jan 12 18:19:23 2011] [notice] Server built: Oct 18 2010 01:58:12
      [Wed Jan 12 18:19:23 2011] [notice] Parent: Created child process 412
      [Wed Jan 12 18:19:23 2011] [notice] Child 412: Child process is running
      [Wed Jan 12 18:19:23 2011] [notice] Child 412: Acquired the start mutex.
      [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting 64 worker threads.
      [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting thread to listen on port 80.

    and after accessing the page (the page just has echo phpinfo()), I get this error in error.log:

      [Wed Jan 12 18:20:54 2011] [warn] [client 127.0.0.1] (OS 109)The pipe has been ended. : mod_fcgid: get overlap result error
      [Wed Jan 12 18:20:54 2011] [error] [client 127.0.0.1] Premature end of script headers: index.php

    I have php_memcache.dll in my ext directory and httpd.conf looks like this:

      LoadModule fcgid_module modules/mod_fcgid.so
      FcgidInitialEnv PHPRC "c:/php"
      FcgidInitialEnv PATH "c:/php;C:/WINDOWS/system32;C:/WINDOWS;C:/WINDOWS/System32/Wbem;"
      FcgidInitialEnv SystemRoot "C:/Windows"
      FcgidInitialEnv SystemDrive "C:"
      FcgidInitialEnv TEMP "C:/WINDOWS/Temp"
      FcgidInitialEnv TMP "C:/WINDOWS/Temp"
      FcgidInitialEnv windir "C:/WINDOWS"
      FcgidIOTimeout 64
      FcgidConnectTimeout 32
      FcgidMaxRequestsPerProcess 500
      <Files ~ "\.php$>"
          AddHandler fcgid-script .php
          FcgidWrapper "c:/php/php-cgi.exe" .php
      </Files>

    So the problem has to be related to memcache, because if I disable it, FastCGI seems to work fine. Any possible reasons for this? The memcache service is running; I can check it through Control Panel > Services.

  • Error importing large MySQL dump file which includes binary BLOBs in Windows

    - by Daniel Magliola
    I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing this from the command line, and I'm getting a very weird error:

      ERROR 2005 (HY000) at line 3118: Unknown MySQL server host '+?*á±dÆ-N+Æ·h^ye"p-i+ Z+-$?P+Y.8+|?+l8/l¦¦î7æ¦X¦XE.ºG[ ;-ï?éµ?º+¦¦].?+f9d릦'+ÿG?-0à¡úè?-?ù??¥'+NÑ' (11004)

    I'm attaching the screenshot because I'm assuming the binary data will get lost. I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 GB), which is not insanely large but not trivially small either, and the fact that many of these tables have JPG images in them (which is why the file is 2 GB, for the most part). Also, the dump was taken on a Linux machine and I'm importing it into Windows; I'm not sure if that could add to the problems (I understand it shouldn't). Now, that binary garbage is why I think the images in the file might be a problem, but I've been able to import similar dumps from the same hosting company in the past, so I'm not sure what the issue might be. Also, trying to look into this file (and line 3118 in particular) is kind of impossible given its size (I'm not really handy with Linux command line tools like grep, sed, etc.). The file might be corrupted, but I'm not exactly sure how to check that. What I downloaded was a .gz file, which I "tested" with WinRAR and it says it looks OK (I'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that. Any ideas what could be going on / how to get past this error? I'm not very attached to the data in particular, since I just want this as a copy for dev, so if I have to lose a few records, I'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel
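    Two checks that usually settle whether this is corruption or a packet-size problem, sketched with assumed file and database names (run them on the Linux side or under Cygwin/Git Bash); with 2 GB of JPG blobs, max_allowed_packet cutting a statement short is the first thing to rule out:

      gunzip -t dump.sql.gz                          # verify the gzip CRC, same idea as the WinRAR test
      gunzip -c dump.sql.gz > dump.sql
      sed -n '3110,3125p' dump.sql | cut -c1-200     # peek around line 3118 without opening the whole file
      mysql -u root -p -e "SET GLOBAL max_allowed_packet=1073741824;"
      mysql -u root -p --max-allowed-packet=1G devdb < dump.sql

    If the server-side packet limit was the culprit, re-taking the dump with mysqldump --hex-blob also makes binary columns survive this kind of round trip more reliably.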

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage the mounted partitions from puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky. For example, by default the owner of the directory is root, and if the root (/) of the mounted partition has another owner, puppet will try to change it, and I don't want this. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make puppet not care about the permissions of the directory used as the mount point? This is what I'm using right now:

      define extra_mount_point(
        $device,
        $location = "/mnt",
        $fstype   = "xfs",
        $owner    = "root",
        $group    = "root",
        $mode     = 0755,
        $seltype  = "public_content_t",
        $options  = "ro,relatime,nosuid,nodev,noexec",
      ) {
        file { "${location}/${name}":
          ensure  => directory,
          owner   => "${owner}",
          group   => "${group}",
          mode    => $mode,
          seltype => "${seltype}",
        }
        mount { "${location}/${name}":
          atboot  => true,
          ensure  => mounted,
          device  => "${device}",
          fstype  => "${fstype}",
          options => "${options}",
          dump    => 0,
          pass    => 2,
          require => File["${location}/${name}"],
        }
      }

      extra_mount_point { "sda3":
        device  => "/dev/sda3",
        fstype  => "xfs",
        owner   => "ciupicri",
        group   => "ciupicri",
        options => "relatime,nosuid,nodev,noexec",
      }

    In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi all -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:

      1. Each site can store files in a shared "namespace"; that is, all the files would show up in the same filesystem.
      2. Each site would not send data over the WAN unless necessary, i.e. there would be local storage on each side of the WAN that would be "merged" into the same logical filesystem.
      3. Linux & free ($$$) is a must.

    Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local, and all data from the remote side of the WAN would be copied locally all the time. I have looked into Lustre, and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that went with the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?

  • linux + create hosts file from CSV file by sed or awk or perl

    - by yael
    I have the following CSV file. This file defines which Linux machines exist in the system and their IPs. My target is to create a hosts file from this file. Please advise how to create a hosts file as in example 1 from my CSV file (I need to match the IP address from the CSV file and put it in the first field of the hosts file, then match the Linux name and put it in the second field, as in example 1). Remark: this should be done with sed, awk or perl, as I need to write the solution in my bash script.

    CSV file:

      , machine , VM-LINUX1 , SZ , Phy , 10.213.158.18 , PROXY ,
      VM-LINUX2 , SZ , 10.213.158.19 , OLD HW ,
      VM-LINUX3 , SZ , 10.213.158.20 , ,
      VM-LINUX4 , SZ , Phy , 10.213.158.21 , ,
      VM-LINUX5 , SZ , Phy , OUT , EXT , LAN3 , 10.213.158.22 , INTERNAL ,
      VM-LINUX6 , SZ , Phy , 10.213.158.23 , , server , new HW ,
      VM-LINUX7 , SZ , Phy , 10.213.158.24 , OUT, LAN3 ,
      VM-LINUX8 , SZ , 10.213.158.25 , OLD HW , machine ,
      VM-LINUX9 , SZ , Phy , INT , 10.213.158.26 , LAN2, AN45, ,
      VM-LINUX10 , SZ , Phy , 10.213.158.27 , ,
      VM-LINUX11 , SZ , Phy , LAN5 , 10.213.158.28 ,

    example 1 (hosts file):

      10.213.158.18   VM-LINUX1
      10.213.158.19   VM-LINUX2
      10.213.158.20   VM-LINUX3
      10.213.158.21   VM-LINUX4
      10.213.158.22   VM-LINUX5
      10.213.158.23   VM-LINUX6
      10.213.158.24   VM-LINUX7
      10.213.158.25   VM-LINUX8
      10.213.158.26   VM-LINUX9
      10.213.158.27   VM-LINUX10
      10.213.158.25   VM-MACHINE8
      10.213.158.26   STAR9
      10.213.158.27   TOP10
      10.213.158.28   SERVER11
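    A hedged awk sketch (it assumes the IP is always the first field that looks like a dotted quad and that the hostname is the first field starting with "VM-"; the extra names in example 1 that do not appear in the CSV are left aside):

      awk -F',' '{
          ip = ""; name = ""
          for (i = 1; i <= NF; i++) {
              gsub(/^[ \t]+|[ \t]+$/, "", $i)                       # trim spaces around each field
              if (name == "" && $i ~ /^VM-/)                  name = $i
              if (ip   == "" && $i ~ /^([0-9]+\.){3}[0-9]+$/) ip = $i
          }
          if (ip != "" && name != "") printf "%s\t%s\n", ip, name
      }' machines.csv > hosts.generated

    Appending hosts.generated to /etc/hosts (and spot-checking a few names with getent hosts afterwards) is then a separate, deliberate step.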

  • Using our own certificate authority for business email encryption

    - by LumenAlbum
    I've read the similar questions available on Server Fault but I haven't quite found a definite answer on the security aspect of it, hence my question: I'm the administrator of an office working with tax data and we want to start using certificate-based email encryption with our clients. Considering the prices for certificates issued by VeriSign & Co., I was wondering if we couldn't issue the necessary certificates with a certificate authority of our own. I realize that they do not offer the trust hierarchy that commercial certificates do, but I don't see why we would need that. Most of our clients have small businesses and only 20% of them even exchange data with us via email. So if we were to issue certificates for those 20% and for our employees, that would enable us to use encrypted email. Of course they would have to trust our certificate authority and thus receive our public root certificate once. But if we were to hand it to them (or install it) personally, they'd know that it really is our certificate. Is there a huge security risk that I am missing here? As long as nobody has access to our certificate authority server, nobody should be able to interfere with security, right? And the client certificates would be generated and handed out by us as well... Please advise me if I am making an error in judgement here, and thank you in advance.
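    Mechanically this is straightforward with openssl; a minimal sketch of a private root plus one client certificate (names, lifetimes and key sizes are placeholders, and a production setup would add S/MIME key-usage extensions and keep ca.key offline):

      # root CA (guard ca.key carefully)
      openssl genrsa -out ca.key 4096
      openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/O=Tax Office/CN=Tax Office Root CA"
      # one client/employee certificate
      openssl genrsa -out alice.key 2048
      openssl req -new -key alice.key -out alice.csr -subj "/CN=Alice Example/[email protected]"
      openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 730 -out alice.crt
      # bundle for import into Outlook/Thunderbird etc.
      openssl pkcs12 -export -in alice.crt -inkey alice.key -certfile ca.crt -out alice.p12

    The security of the whole scheme then rests on how ca.key is stored and on how revocation is handled if a laptop with a client key goes missing.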

  • postfix is unable to send emails to external domains

    - by BoCode
    Whenever I try to send an email from my server, I get the following error:

      Nov 13 06:37:21 xyz postfix/smtpd[6730]: connect from unknown[a.b.c.d]
      Nov 13 06:37:21 xyz postfix/smtp[6729]: warning: host X.com[x.y.z.d]:25 greeted me with my own hostname xyz.biz
      Nov 13 06:37:21 xyz postfix/smtp[6729]: warning: host X.com[x.y.z.d]:25 replied to HELO/EHLO with my own hostname xyz.biz
      Nov 13 06:37:21 xyz postfix/smtp[6729]: 2017F1B00C54: to=<[email protected]>, relay=X.com[x.y.z.d]:25, delay=0.98, delays=0.17/0/0.81/0, dsn=5.4.6, status=bounced (mail for X.com loops back to myself)

    This is the output of postconf -n:

      address_verify_poll_delay = 1s
      alias_database = hash:/etc/aliases
      alias_maps =
      body_checks_size_limit = 40980000
      command_directory = /usr/sbin
      config_directory = /etc/postfix
      connection_cache_ttl_limit = 300000s
      daemon_directory = /usr/libexec/postfix
      data_directory = /var/lib/postfix
      debug_peer_level = 1
      debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
      default_delivery_slot_cost = 2
      default_destination_concurrency_limit = 10
      default_destination_recipient_limit = 1
      default_minimum_delivery_slots = 3
      default_process_limit = 10000
      default_recipient_refill_delay = 1s
      default_recipient_refill_limit = 10
      disable_dns_lookups = yes
      enable_original_recipient = no
      hash_queue_depth = 2
      home_mailbox = Maildir/
      html_directory = no
      in_flow_delay = 0
      inet_interfaces = all
      inet_protocols = ipv4
      initial_destination_concurrency = 100
      local_header_rewrite_clients =
      mail_owner = postfix
      mailq_path = /usr/bin/mailq
      manpage_directory = /usr/share/man
      master_service_disable =
      milter_default_action = accept
      milter_protocol = 6
      mydestination = $myhostname, localhost.localdomain, localhost, $mydomain
      mydomain = xyz.biz
      myhostname = xyz.biz
      mynetworks = 168.100.189.0/28, 127.0.0.0/8
      myorigin = $mydomain
      newaliases_path = /usr/bin/newaliases
      non_smtpd_milters = $smtpd_milters
      qmgr_message_active_limit = 500
      qmgr_message_recipient_limit = 500
      qmgr_message_recipient_minimum = 1
      queue_directory = /var/spool/postfix
      queue_run_delay = 300s
      readme_directory = /usr/share/doc/postfix.20.10.2/README_FILE
      receive_override_options = no_header_body_checks
      sample_directory = /usr/share/doc/postfix.2.10.2/examples
      sendmail_path = /usr/sbin/sendmail
      service_throttle_time = 1s
      setgid_group = postdrop
      smtp_always_send_ehlo = no
      smtp_connect_timeout = 1s
      smtp_connection_cache_time_limit = 30000s
      smtp_connection_reuse_time_limit = 30000s
      smtp_delivery_slot_cost = 2
      smtp_destination_concurrency_limit = 10000
      smtp_destination_rate_delay = 0s
      smtp_destination_recipient_limit = 1
      smtp_minimum_delivery_slots = 1
      smtp_recipient_refill_delay = 1s
      smtp_recipient_refill_limit = 1000
      smtpd_client_connection_count_limit = 200
      smtpd_client_connection_rate_limit = 0
      smtpd_client_message_rate_limit = 100000
      smtpd_client_new_tls_session_rate_limit = 0
      smtpd_client_recipient_rate_limit = 0
      smtpd_delay_open_until_valid_rcpt = no
      smtpd_delay_reject = no
      smtpd_discard_ehlo_keywords = silent-discard, dsn
      smtpd_milters = inet:127.0.0.1:8891
      smtpd_peername_lookup = no
      unknown_local_recipient_reject_code = 550

    What could be the issue?
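    The "loops back to myself" bounce means postfix resolved the destination for the recipient domain, found itself (which the "greeted me with my own hostname" warnings confirm), but does not consider that domain one of its own. A couple of hedged checks (DOMAIN.TLD stands in for the redacted X.com):

      postconf mydestination mydomain myhostname relay_domains
      dig +short MX DOMAIN.TLD        # does the MX record point at this box?
      dig +short A $(dig +short MX DOMAIN.TLD | awk '{print $2}')

    If this server really is meant to receive mail for that domain, the domain has to be listed in mydestination (or in virtual/relay domains); if it is not, the DNS MX record or a relayhost setting is what needs fixing. Note also that disable_dns_lookups = yes makes the smtp client resolve destinations via A records instead of MX, which can produce exactly this kind of surprise.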

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution, but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strangely with a few files (note the company name has been changed in the paths to protect the innocent):

      @ECHO OFF
      set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
      SET prefix="E:\backup_log-"
      SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
      SET dest_dir="E:\dest"
      SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
      SET what_to_copy=/COPY:DAT /MIR
      SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL
      ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%
      set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
      cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
      :END

    Normally the source would just be M:\Company name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the destination:

      Someclient\SONICP~1.DOC
      Someclient\SONICP~2.DOC
      Someclient\SONICP~3.DOC

    However, files in the same directory named:

      TIMESH~1.XLS
      TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace that these files were even attempted to be copied, as there are no errors in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.

  • SSH traffic over openvpn connection freezes when I cat a file

    - by user42055
    I have an openvpn (version 2.1_rc15 at both ends) connection set up between two Gentoo boxes using shared keys. It works fine for the most part: I use MySQL, HTTP, FTP and scp over the VPN with no problems. But when I ssh from the client to the server over the VPN, weird things happen. I can log in and execute some commands, but if I try to run an ncurses application like top, or try to cat a file, the connection stalls and I have to sever the ssh session. I can, for example, execute "echo blah; echo .; echo blah" and it will output the three lines of text over the ssh session fine. But if I execute "cat /etc/motd", the session freezes the moment I press enter. I compiled openvpn 2.1.1 on my Mac and copied over my config directory from my Gentoo client; the Mac connected and ssh sessions worked fine without freezing. I then compiled it on my older Gentoo box (2.6.26 kernel), which I am retiring due to a dying hard drive, and ssh over it also works perfectly. Why does it fail on my brand new Gentoo box? I've tried compiling three different kernels in case it was that, but other than that there should be no difference between my older and my newer Gentoo boxes that I can think of. Any suggestions on what's wrong?
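    The "small output works, full-screen output hangs" pattern is the classic signature of an MTU/fragmentation problem on the tunnel: big packets get dropped while tiny ones get through. A hedged way to test and work around it (the tunnel address and sizes are assumptions):

      # probe what fits through the tunnel without fragmentation
      ping -M do -s 1400 10.8.0.1
      ping -M do -s 1200 10.8.0.1
      # if the larger probes fail, add matching lines on BOTH ends of the openvpn config:
      #   fragment 1300
      #   mssfix 1300
      # (or experiment with "tun-mtu" / "mtu-test" instead)

    If pings at all sizes go through cleanly, MTU is off the hook and the comparison shifts to kernel config differences (TUN/TAP, netfilter/conntrack options) between the old and new boxes.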

  • PhpMyAdmin Missing parameter:

    - by Ali
    Everything was working fine until this moment. I want to create a new database and I'm receiving this error:

      db_create.php: Missing parameter: new_db (FAQ 2.8)

    Also, when I try to export my database I receive the following errors:

      export.php: Missing parameter: what (FAQ 2.8)
      export.php: Missing parameter: export_type (FAQ 2.8)

    When I looked up FAQ 2.8 from the link suggested in phpMyAdmin ("2.8 I get 'Missing parameters' errors, what can I do?"), it lists a few points to check:

      - In config.inc.php, try to leave the $cfg['PmaAbsoluteUri'] directive empty. See also FAQ 4.7.
      - Maybe you have a broken PHP installation or you need to upgrade your Zend Optimizer. See http://bugs.php.net/bug.php?id=31134.
      - If you are using Hardened PHP with the ini directive varfilter.max_request_variables set to the default (200) or another low value, you could get this error if your table has a high number of columns. Adjust this setting accordingly. (Thanks to Klaus Dorninger for the hint.)
      - In the php.ini directive arg_separator.input, a value of ";" will cause this error. Replace it with "&;".
      - If you are using Hardened-PHP, you might want to increase request limits.
      - The directory specified in the php.ini directive session.save_path does not exist or is read-only.

    I did check php.ini to make sure that I have session.save_path = "/tmp". I'm using a Mac Xserve and running with MAMP. I tried restarting my server and nothing helped. Please, if anyone can help, give me some suggestions. Apologies if I posted this in the wrong place.
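    Since this is MAMP, the php.ini that the Apache module loads is not necessarily the one that was edited; a quick hedged check (MAMP paths vary by version):

      # which ini file and session path the CLI sees
      php -i | grep -E 'Loaded Configuration File|session.save_path'
      # the web SAPI can differ: compare with the phpinfo() page served by MAMP
      ls -ld /tmp /Applications/MAMP/tmp/php 2>/dev/null   # whichever is in use must exist and be writable

    If phpinfo() shows a different session.save_path than expected, either fix that path's permissions or point it at a writable directory in the php.ini MAMP actually loads, then restart MAMP.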

  • Limiting access in Silverlight\Pivotviewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight control loads a .cxml index file for a group of images. My need is to make the .cxml file and the image files inaccessible to users. Right now, without that requirement handled, I code like this in C#, with the file hosted in the document root:

      _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute));

    This means that my cxml (and then the images) are available over HTTP to everyone who knows the URI. I'm a newbie to server configuration, so any help/hint would be deeply appreciated. Someone suggested that I take the files out of the root, but it seems like I can't go and pick them up from Silverlight if they are not reachable by URL; at least I didn't manage to understand how. Someone else suggested playing with the web.config file to hide the URLs, but I don't really know where to start. My question is: what's the best practice to hide my stuff? Obviously I can edit the question if you need more details.
