Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.


  • Restrict number of characters to be typed for af:autoSuggestBehavior

    - by Arunkumar Ramamoorthy
    When using AutoSuggestBehavior on a UI component, the auto-suggest list is displayed as soon as the user starts typing in the field. In this article, we will see how to hold back the auto-suggest list until the user has typed a couple of characters. This is most useful on low-latency networks and when the auto-suggest list is large. We can display a static message to let the user know that they need to type more characters before a list of values is shown. Let's see how we can implement this.

    Assume we have an input text for users to enter a country name, with an auto-suggest behavior added to it:

        <af:inputText label="Country" id="it1">
          <af:autoSuggestBehavior />
        </af:inputText>

    Assume also that we have a VO (named CountryView for this example) with a view criteria that filters the VO based on a bind variable. We generate the View Impl class from the java node (including bind variables) and expose the setter method of the bind variable to the client interface. In the view layer, we create a tree binding for the VO and a method binding for the exposed setter in the pagedef file.

    Since we have already added the input text and the autoSuggestBehavior, all that remains is to build the suggested items for the auto-suggest list. Let us add a method in the backing bean that returns the List of select items to be bound to the list:

        public List<SelectItem> onSuggest(String searchTerm) {
          ArrayList<SelectItem> selectItems = new ArrayList<SelectItem>();
          if (searchTerm.length() > 1) {
            // get access to the binding context and binding container at runtime
            BindingContext bctx = BindingContext.getCurrent();
            BindingContainer bindings = bctx.getCurrentBindingsEntry();
            // set the bind variable value that is used to filter the View Object
            // query of the suggest list; the View Object instance has a View
            // Criteria assigned
            OperationBinding setVariable = (OperationBinding) bindings.get("setBind_CountryName");
            setVariable.getParamsMap().put("value", searchTerm);
            setVariable.execute();
            // the data in the suggest list is queried by a tree binding
            JUCtrlHierBinding hierBinding = (JUCtrlHierBinding) bindings.get("CountryView1");
            // re-query the list based on the new bind variable value
            hierBinding.executeQuery();
            // the rangeSet, the list of queried entries, is of type JUCtrlValueBindingRef
            List<JUCtrlValueBindingRef> displayDataList = hierBinding.getRangeSet();
            for (JUCtrlValueBindingRef displayData : displayDataList) {
              Row rw = displayData.getRow();
              // populate the SelectItem list
              selectItems.add(new SelectItem(
                  (String) rw.getAttribute("Name"),
                  (String) rw.getAttribute("Name")));
            }
          } else {
            SelectItem a = new SelectItem("", "Type in two or more characters..", "", true);
            selectItems.add(a);
          }
          return selectItems;
        }

    What the method does is check the length of the search term: if it is more than 1 (i.e. 2 or more characters), it returns the actual suggest list. Otherwise, it creates a read-only select item, new SelectItem("","Type in two or more characters..","",true), and adds it to the list of suggested items to be displayed. The last (boolean) parameter of the SelectItem makes it read-only, so users cannot select the static message from the displayed list.

    Finally, bind this method to the suggestedItems property of the input text's autoSuggestBehavior:

        <af:inputText label="Country" id="it1">
          <af:autoSuggestBehavior suggestedItems="#{AutoSuggestBean.onSuggest}"/>
        </af:inputText>

    Read the article

  • Strange DNS problem [seems to be IPv6 issue]

    - by Homer J. Simpson
    Hi, I'm experiencing strange problems with my Kubuntu 9.10 when doing DNS requests from various applications. The requests are extremely slow, so loading pages in Firefox or Konqueror, doing package installations in Kpackagemanager and other apps is really painful, while Opera, for example, doesn't have any problems, and ping resolves names quickly as well. I checked the proxy settings of the affected applications as well as of the general system and there are none, so it doesn't seem as though there is anything in between. Does anybody have an idea what to check for possible problem sources, or how to solve this? I'm behind a DSL home router which does the DHCP (and works well with my other computer). Any kind of advice would be really helpful. Edit: It seems to be some kind of IPv6 problem, as I could get it to work by disabling IPv6 explicitly in Firefox. Is there a general solution to this?
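    One general (system-wide) approach worth trying is to disable IPv6 at the kernel level rather than per application; a minimal sketch, assuming a kernel recent enough to expose the disable_ipv6 sysctl (2.6.29+, which Kubuntu 9.10's 2.6.31 is):

        # test immediately, without reboot
        sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
        # make it persistent across reboots
        echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.conf

    If slow lookups were caused by the resolver trying AAAA records first and timing out, this usually removes the delay for every application at once. On some kernels of that era the "all" setting does not propagate to interfaces that are already up, so the per-interface form (e.g. net.ipv6.conf.eth0.disable_ipv6=1) is worth trying if the global one has no effect.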

    Read the article

  • Tar exclude not working?

    - by Andrew Fashion
    tar -cvf file.tar --exclude=thumbs/ \
        --exclude=uploads_event/ \
        --exclude=uploads_forum/ \
        --exclude=uploads_admin/ \
        --exclude=uploads_userpoints/ \
        --exclude=uploads_group/ \
        --exclude=up_old/ \
        --exclude=uploads_user/ \
        --exclude=uploads_wall/ \
        directory_to_tar/
    Do I have it wrong? I'm trying to tar an entire directory but exclude all those directories and any files in those folders completely.
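    A likely culprit (an assumption, but a common one with GNU tar): the trailing slashes. tar matches each exclude pattern against member names, which do not end in /, so thumbs/ never matches while thumbs does. A sketch of the corrected command:

        tar -cvf file.tar \
            --exclude=thumbs \
            --exclude='uploads_*' \
            --exclude=up_old \
            directory_to_tar/

    Note that --exclude accepts shell-style wildcards, so the single uploads_* pattern can replace the individual uploads directories, if that matches the intent.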

    Read the article

  • using iptables to change a destination port but keep the ip the same.

    - by Scott Chamberlain
    I am playing around with transparent proxies. Currently, when a program makes a request to a computer on port 80, I use

        iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234

    to redirect it to the proxy that I am playing with. The proxy then sends its outbound request to port 81 (since all outbound port-80 traffic would be fed back into the proxy), so I want to do something like

        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80

    The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated.
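    One detail that may help, from the iptables extensions documentation as I understand it: DNAT's --to-destination argument has the form [ipaddr][:port], and the address part is optional, so rewriting only the port while leaving the destination IP untouched should look like:

        # rewrite only the port; the original destination IP is preserved
        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination :80

    Treat this as a sketch to verify on your kernel/iptables version; some older releases insisted on an address part. Rule order also matters here: the port-81 packet does not match the REDIRECT rule above it, and only the first packet of a connection traverses the nat chain, so the rewritten port-80 packets are not re-captured.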

    Read the article

  • Configuring the SOA Human Task Hostname by Antonis Antoniou

    - by JuergenKress
    When a human task is opened in BPM Workspace, it will by default try to connect to either localhost or the server's alias. So if you try to access the BPM Workspace remotely (from a computer other than the one where Oracle SOA is running) you will get an HTTP error (unable to connect). You can fix this issue at run time using Enterprise Manager (EM). Log in to EM and, from the farm navigator, select your composite by expanding the "SOA", "soa-infra" and your partition nodes. Read the complete article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Human task, Antonis Antoniou, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • High Availability Configuration using Heartbeat and Pacemaker

    - by pradeepchhetri
    I have the following setup: high availability is configured between two load balancers (HAProxy), so that if HAProxy1 goes down, the floating IP gets transferred to the other load balancer, HAProxy2, and all clients then get their responses from HAProxy2, which at the back end is load balancing across the same two web servers. This removes the single point of failure of having only one HAProxy. Whenever I stop heartbeat on HAProxy1, the floating IP moves to HAProxy2. But I want to configure it so that whenever the haproxy process itself goes down, the floating IP is reassigned to HAProxy2. Can someone tell me how to implement this?
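    Since Pacemaker is already in the stack, one way is to let it manage haproxy as a monitored resource grouped with the floating IP, so a failed process (not just a failed node) triggers failover. A minimal crm configure sketch, assuming an LSB init script named haproxy; the IP and resource names are placeholders:

        primitive p_vip ocf:heartbeat:IPaddr2 \
            params ip=192.0.2.10 cidr_netmask=24 \
            op monitor interval=10s
        primitive p_haproxy lsb:haproxy \
            op monitor interval=10s
        group g_loadbalancer p_vip p_haproxy

    With the group, Pacemaker keeps the VIP and haproxy on the same node and moves both to HAProxy2 when the haproxy monitor operation fails.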

    Read the article

  • SMTP for multiple domains on virtual interfaces

    - by Pawel Goscicki
    The setup is like this (Ubuntu 9.10):

        eth0:   1.1.1.1  name.isp.com
        eth0:0  2.2.2.2  example2.com
        eth0:1  3.3.3.3  example3.com

    example2.com and example3.com are web apps which need to send emails to their users. 2.2.2.2 points to example2.com and vice versa (A/PTR); MX is Google, which handles all incoming mail. The same goes for 3.3.3.3 and example3.com. Requirements: local delivery must be disabled (mail must be delivered to the server specified by MX), so that the following works (note that there is no local user bob on the machine, but there is an existing bob email user):

        echo "Test" | mail -s "Test 6" [email protected]

    And I need to be able to specify from which IP/domain name the email is delivered when sending it. I fought with sendmail, with not much luck. Here's some debug info:

        sendmail -d0.12 -bt < /dev/null
        Canonical name: name.isp.com
        UUCP nodename: host
        a.k.a.: example2.com
        a.k.a.: example3.com
        ...

    Sendmail always uses the canonical name (taken from eth0). I've found no way for it to select one of the a.k.a. names; it uses the canonical name when sending email:

        echo -e "To: [email protected]\nSubject: Test\nTest\n" | sendmail -bm -t -v
        [email protected]... Connecting to [127.0.0.1] via relay...
        220 name.isp.com ESMTP Sendmail 8.14.3/8.14.3/Debian-9ubuntu1; Wed, 31 Mar 2010 16:33:55 +0200; (No UCE/UBE) logging access from: localhost(OK)-localhost [127.0.0.1]
        >>> EHLO name.isp.com

    I'm OK with other SMTP solutions. I've looked briefly at nbsmtp, msmtp and nullmailer, but I'm not sure they can deal with disabling local delivery and selecting different domains when sending emails. I also know about spoofing the sender field by using mail -a "From: <[email protected]>", but it seems to be a half-solution (mails are still sent from the isp.com domain instead of the proper example2.com, so the PTR records are unused and there's more risk of being flagged as spam/spammer).
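    For the outgoing side, msmtp does support per-sender account selection, which covers the "different identity per app" requirement; whether it can also bind the source IP depends on the version, so treat this as a sketch (account names and relay hosts are placeholders, and the source_ip option exists only in newer msmtp releases — verify against your version's docs):

        # ~/.msmtprc
        defaults
        auto_from off

        account example2
        host smtp.your-relay.invalid
        from bob@example2.com
        # source_ip 2.2.2.2    # newer msmtp releases only

        account example3
        host smtp.your-relay.invalid
        from bob@example3.com
        # source_ip 3.3.3.3

        account default : example2

    Usage is then explicit per message, e.g. printf 'Subject: Test\n\nTest\n' | msmtp -a example3 someone@destination.invalid — there is no local delivery at all, since msmtp always hands the message to the configured relay.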

    Read the article

  • Sub-process /usr/bin/dpkg returned an error code (1)

    - by rohit
    Hey friends, I am getting the following error when I try to purge shorewall:

        root@aptosid:/etc# apt-get purge shorewall
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          shorewall*
        0 upgraded, 0 newly installed, 1 to remove and 3 not upgraded.
        1 not fully installed or removed.
        After this operation, 1,843 kB disk space will be freed.
        Do you want to continue [Y/n]?
        (Reading database ... 212702 files and directories currently installed.)
        Removing shorewall ...
        : not found/shorewall: 25: /etc/default/shorewall: :q
        Stopping "Shorewall firewall": not done (check /var/log/shorewall-init.log).
        invoke-rc.d: initscript shorewall, action "stop" failed.
        dpkg: error processing shorewall (--purge):
         subprocess installed pre-removal script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         shorewall
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        root@aptosid:/etc#

    Please help me out.
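    One clue worth chasing, based purely on the log line ": not found/shorewall: 25: /etc/default/shorewall: :q": the pre-removal script sources /etc/default/shorewall, and line 25 of that file appears to contain a stray ":q" — the kind of leftover you get when editor input is accidentally saved into a file. A sketch of the check and fix (an assumption about the cause, not a confirmed diagnosis):

        sed -n '20,30p' /etc/default/shorewall    # inspect around line 25
        sed -i '/^:q/d' /etc/default/shorewall    # drop the stray editor command
        apt-get purge shorewall                   # retry the purge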

    Read the article

  • Barcodes and Bugs

    - by Tim Dexter
    A great mail from Mike at Browning last week. He has been through the wringer getting his BIP barcoding sorted out, but he's now out of the woods. Here's the final result. By way of explanation, an excerpt from Mike's email:

    This is an example of the GS1-128 carton shipping labels we are now producing with BIP in our web application for our vendors who drop-ship products to our dealers. It produces 4 labels per printed page, in PDF format, on peel & stick label paper. Each label has a unique carton number, and a unique carton serial number in the SSCC-18 barcode. This example is for Cabelas (each customer has slightly different GS1-128 label format requirements — a custom template for each — a pain!). I am using custom Java encoders I wrote for the UPC and SSCC-18 barcodes, and a standard encoder (code128b) for the ShipTo zip barcode. Is there any way yet to get around that SUPER ANNOYING bug when opening the RTF template in MS Word, where it replaces my XSL code text in the barcode fields with gibberish??? Every time I open it I have to re-enter all the XSL code. Not only to be able to read & edit it, but also to get it to work in BIP (BIP doesn't like the gibberish if I upload the template that has it).

    Mike's last point, regarding the annoying bug in the template builder, is one that I have experienced occasionally. The development team have looked at it and found it to be an issue with MS Word and not a plugin problem. That's all well and good, but how can you get around it? Well, you can take advantage of the font mapping that BIP offers to get the barcodes into the PDF output. As many of you know, to get a barcode font to appear in the PDF output you need to employ the xdo.cfg file in the template builder config directory. You would normally have an entry such as this:

        <font family="Code 128" style="normal" weight="normal">
          <truetype path="C:\windows\fonts\128R00.TTF" />
        </font>

    to map a barcode font so that it renders in the PDF output when testing from the template builder plugin.

    Mike's issue is only present when the form field is highlighted with a barcode font; the other fields in the template are OK. What you can do to get around the issue is bend the config entry so you avoid using the barcode font in the template at all. Change the entry to something like:

        <font family="Calibri" style="normal" weight="normal">
          <truetype path="C:\windows\fonts\128R00.TTF" />
        </font>

    Note that we are mapping Calibri, a humanly readable and non-erroring font in the template, to the Code 128 barcode font. Where you used to highlight the field with the barcode font in MS Word, you now use the Calibri font instead. At run time, BIP will look up the Calibri font mapping and drop in the Code 128 font. Of course, Calibri is just an example; you need to pick a font that you are not going to use anywhere else in the layout.

    Read the article

  • collectd:Monitoring server not showing clients

    - by Quintin Par
    I have set up a monitoring server with the following configuration:

        <Plugin network>
          Listen "0.0.0.0" "25826"
        </Plugin>

    My clients are sending data to the monitoring server (verified through tcpdump), and the collection directory shows that the data is being dumped to /var/lib/collectd/rrd:

        [ec2-user@x rrd]$ ll
        total 4
        drwxr-xr-x 11 root root 4096 Nov 20 17:53 x-web-1.y.com

    I have also verified with find . -mmin 1 that it is being constantly updated:

        [ec2-user@x rrd]$ find . -mmin 1
        ./x-web-1.y.com/interface-eth0/if_errors.rrd
        ./x-web-1.y.com/interface-eth0/if_packets.rrd
        ./x-web-1.y.com/interface-eth0/if_octets.rrd
        ./x-web-1.y.com/disk-xvda1/disk_time.rrd
        ./x-web-1.y.com/disk-xvda1/disk_ops.rrd
        ./x-web-1.y.com/disk-xvda1/disk_octets.rrd
        ./x-web-1.y.com/disk-xvda1/disk_merged.rrd

    But when I look through collectd-web, I don't see the clients. What might be wrong in my setup?
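    One thing to check (an assumption; paths vary by distribution): the collectd-web frontend reads its data directory from its own config, not from collectd.conf, so hosts that are happily writing RRD files can still be invisible if the two disagree. A sketch:

        # collection.conf (often /etc/collectd/collection.conf) drives collection.cgi
        datadir: "/var/lib/collectd/rrd/"

    The trailing slash and an exact match with the directory collectd writes to are the usual gotchas.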

    Read the article

  • Automatically mounting windows share in Fedora 12

    - by user15865
    Hi, I'm trying to automatically mount a Windows share in a Fedora 12 (FC12) instance. When I mount it manually, things work:

        mount -t cifs //nas01/servers -o username=guest,password=myPassword /mnt/nas01/servers

    I updated /etc/fstab with the following:

        //nas01/servers /mnt/nas01/servers cifs username=guest,password=myPassword 0 0

    but nothing happens after a reboot. The thing that has me baffled is that after a reboot, if I run:

        mount -a

    the share is mounted. Any ideas on this? Thank you, Martin
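    A common cause worth testing: at the point in boot where local filesystems are mounted, the network is not up yet, so the CIFS mount silently fails; mount -a succeeds later because the network is available by then. Marking the entry as a network filesystem defers it until networking is ready:

        //nas01/servers /mnt/nas01/servers cifs username=guest,password=myPassword,_netdev 0 0

    On Fedora it is the netfs service that picks up _netdev entries at boot, so it needs to be enabled: chkconfig netfs on.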

    Read the article

  • zabbix monitoring mysql database

    - by krisdigitx
    I have a server running multiple instances of MySQL and also the zabbix agent. In zabbix_agentd.conf I have specified:

        UserParameter=multi.mysql[*],mysqladmin --socket=$1 -uzabbixagent extended-status 2>/dev/null | awk '/ $3 /{print $$4}'

    where $1 is the socket of the instance. From the Zabbix server I can run the test successfully:

        zabbix_get -s ip_of_server -k multi.mysql[/var/lib/mysql/mysql2.sock]

    and it returns all the values. However, the Zabbix item/trigger does not generate the graphs. I created a macro for $1, the socket location:

        {$MYSQL_SOCKET1} = '/var/lib/mysql/mysql2.sock'

    and I use this key in items to poll the value:

        multi.mysql[{$MYSQL_SOCKET1},Bytes_sent]

    LOGS: this is what I get in the logs:

        3360:20120214:144716.278 item [multi.mysql['/var/lib/mysql/mysql2.sock',Bytes_received]] error: Special characters '\'"`*?[]{}~$!&;()<>|#@' are not allowed in the parameters
        3360:20120214:144716.372 item [multi.mysql['/var/lib/mysql/mysql2.sock',Bytes_sent]] error: Special characters '\'"`*?[]{}~$!&;()<>|#@' are not allowed in the parameters

    FIXED: {$MYSQL_SOCKET1} = /var/lib/mysql/mysql2.sock — I removed the single quotes from the macro value and it worked.

    Read the article

  • Convert KVM virtual machine to LXC container

    - by linkdd
    I have 2 virtual machines (Debian, using KVM) with virtual hard drives:

        /srv/kvm/ssh.img
        /srv/kvm/www.img

    Both have 3 partitions (/, /home, swap). I want to convert them into root filesystems usable with LXC (in order to use LXC instead of KVM). The only solution I have for the moment is: create a new rootfs, copy the /home partition into it, and reproduce the same configuration in it. But is there an automated way to do it?
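    Not fully automated, but close: the disk images can be attached as network block devices and their partitions copied straight into an LXC rootfs. A sketch, assuming qemu-nbd is available and that partition 1 is / and partition 2 is /home (check with fdisk -l /dev/nbd0 first; the ordering here is a guess):

        modprobe nbd max_part=8
        qemu-nbd --connect=/dev/nbd0 /srv/kvm/www.img
        mkdir -p /var/lib/lxc/www/rootfs /mnt/img
        mount /dev/nbd0p1 /mnt/img
        rsync -a /mnt/img/ /var/lib/lxc/www/rootfs/
        umount /mnt/img
        mount /dev/nbd0p2 /mnt/img            # the /home partition (order assumed)
        rsync -a /mnt/img/ /var/lib/lxc/www/rootfs/home/
        umount /mnt/img
        qemu-nbd --disconnect /dev/nbd0

    The usual container clean-up still applies afterwards: trim fstab, ttys/inittab and the network config inside the copied rootfs.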

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server to my local device, but when I combine it with a cron job my SSH connection times out. Just to be clear: the data is stored on a remote server and I want it stored on my local server, so the backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in the terminal like this:

        rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job like this:

        10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the SSH connection times out. When the cron job executes, it sends a mail to the root user with output like this:

        From local.xx.xx.xx Tue Jul 2 11:20:17 2013
        X-Original-To: username
        Delivered-To: [email protected]
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=username>
        X-Cron-Env: <USER=username>
        X-Cron-Env: <HOME=/Users/username>
        Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)

        ssh: connect to host IP_ADDRESS port XX: Operation timed out
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed in the terminal but not when used in a cron job. Can anybody explain this?
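    One pattern that fits the symptoms (an assumption, since the real host and port are redacted): cron runs with a minimal environment, so anything the interactive shell provides — an ssh-agent, keychain integration, a VPN or tunnel brought up by the login session — is absent when the job fires. Making the SSH side fully explicit in the crontab removes most of these variables:

        10 11 * * * rsync -chavzP --stats -e "ssh -p 2222 -i /Users/username/.ssh/id_rsa" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    Here -p 2222 and the identity-file path are placeholders; the point is that the cron job should not rely on configuration that only exists in an interactive session.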

    Read the article

  • Help me understand Ubuntu user/group permissions.

    - by Bartek
    I'm beginning to deal with more than one user on my system (it's a VPS serving some sites) and I need to make sure I understand how group permissions work. Here's my setup: I have an account named "admin"; it's basically the primary account used for serving most of the sites that I control myself. Now, I added a second account named "ville", as one of my users wants to be able to administer that site. So, I can do this the easy way and just chown their domains folder to the ville user and voilà, they have permission to do whatever they need, and so forth. However, let's say I also want to give the admin user access to the files (modifying and all). How can I put both users into the same group and give them both permission? I've tried doing:

        sudo usermod -a -G admin ville

    to add ville to the admin group, but ville still cannot edit files owned by admin. Permissions on the primary directory for the ville user are read/write for both owner and group, and the current owner/group of the files is admin:admin, but ville still can't write into the directory. So, what should I be doing here to get this right and secure at the same time? Thank you.
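    Two things worth checking (common trip-ups rather than a confirmed diagnosis; the directory path is a placeholder): supplementary group membership only takes effect in a new login session, and group members need the execute bit on directories to traverse and write into them. A sketch:

        id ville                  # confirms the admin membership is recorded
        # ville must log out and back in (or run 'newgrp admin') to pick it up
        chmod -R g+rwX /path/to/domains   # group rw, plus x on directories only
        chmod g+s /path/to/domains        # optional: new files keep the shared group

    The capital X in g+rwX adds execute only to directories (and files that are already executable), which is usually what you want for a shared tree.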

    Read the article

  • Is this a HPC or HA mySQL cluster?

    - by Louise Hoffman
    Can someone tell me if this is a High Performance Computing (HPC) or a High Availability (HA) MySQL cluster? There is a picture of the setup. This is the part of the config.ini they talk about:

        [ndbd default]
        NoOfReplicas=2    # Number of replicas

    Am I correct in understanding that NoOfReplicas determines whether I have an HPC or an HA cluster?

    Read the article

  • activate wireless chip on ubuntu

    - by Charles
    I am currently using Ubuntu 10.10 on my laptop; my wireless chip is a Broadcom BCM4312 802.11b/g LP-PHY. I tried to activate the driver for the chip via System - Administration - Additional Drivers and activated the Broadcom STA wireless driver, but the laptop still can't detect any wireless signal. Do I have to do any additional work to make the chip work? And how can I test whether there is physical damage to the chip itself?
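    A few quick checks before suspecting hardware damage (standard tools, though exact output varies by release):

        rfkill list                          # radio soft/hard blocked (e.g. laptop switch)?
        lspci -nn | grep -i broadcom         # is the chip visible on the bus?
        lsmod | grep -E 'wl|b43|ssb'         # which Broadcom driver actually loaded?
        dmesg | grep -iE 'wl|b43|firmware'   # firmware load errors?

    For the BCM4312 LP-PHY specifically, the open b43 driver (with firmware extracted via b43-fwcutter) is a frequently suggested alternative to the STA driver on 10.10; if rfkill shows no block and the STA driver loads but sees no networks, that swap is worth trying.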

    Read the article

  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have an Ubuntu box with a ProFTPD 1.3.4a server. When I try to log in via my FTP client I cannot do anything, as it does not allow me to list directories; I have tried logging in as root and as a regular user, and tried accessing different paths within the FTP server. The error I get in my FTP client is:

        Status:   Retrieving directory listing...
        Command:  CDUP
        Response: 250 CDUP command successful
        Command:  PWD
        Response: 257 "/var" is the current directory
        Command:  PASV
        Response: 227 Entering Passive Mode (172,16,4,22,237,205).
        Command:  MLSD
        Response: 550 Access is denied.
        Error:    Failed to retrieve directory listing

    Any idea? Here is the config of my proftpd:

        #
        # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
        # To really apply changes, reload proftpd after modifications, if
        # it runs in daemon mode. It is not required in inetd/xinetd mode.
        #

        # Includes DSO modules
        Include /etc/proftpd/modules.conf

        # Set off to disable IPv6 support which is annoying on IPv4 only boxes.
        UseIPv6 off
        # If set on you can experience a longer connection delay in many cases.
        IdentLookups off

        ServerName "Drupal Intranet"
        ServerType standalone
        ServerIdent on "FTP Server ready"
        DeferWelcome on

        # Set the user and group that the server runs as
        User nobody
        Group nogroup

        MultilineRFC2228 on
        DefaultServer on
        ShowSymlinks on

        TimeoutNoTransfer 600
        TimeoutStalled 600
        TimeoutIdle 1200

        DisplayLogin welcome.msg
        DisplayChdir .message true
        ListOptions "-l"

        DenyFilter \*.*/

        # Use this to jail all users in their homes
        # DefaultRoot ~

        # Users require a valid shell listed in /etc/shells to login.
        # Use this directive to release that constrain.
        # RequireValidShell off

        # Port 21 is the standard FTP port.
        Port 21

        # In some cases you have to specify passive ports range to by-pass
        # firewall limitations. Ephemeral ports can be used for that, but
        # feel free to use a more narrow range.
        # PassivePorts 49152 65534

        # If your host was NATted, this option is useful in order to
        # allow passive tranfers to work. You have to use your public
        # address and opening the passive ports used on your firewall as well.
        # MasqueradeAddress 1.2.3.4

        # This is useful for masquerading address with dynamic IPs:
        # refresh any configured MasqueradeAddress directives every 8 hours
        <IfModule mod_dynmasq.c>
        # DynMasqRefresh 28800
        </IfModule>

        # To prevent DoS attacks, set the maximum number of child processes
        # to 30. If you need to allow more than 30 concurrent connections
        # at once, simply increase this value. Note that this ONLY works
        # in standalone mode, in inetd mode you should use an inetd server
        # that allows you to limit maximum number of processes per service
        # (such as xinetd)
        MaxInstances 30

        # Set the user and group that the server normally runs at.
        # Umask 022 is a good standard umask to prevent new files and dirs
        # (second parm) from being group and world writable.
        Umask 022 022

        # Normally, we want files to be overwriteable.
        AllowOverwrite on

        # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
        # PersistentPasswd off

        # This is required to use both PAM-based authentication and local passwords
        AuthPAMConfig proftpd
        AuthOrder mod_auth_pam.c* mod_auth_unix.c

        # Be warned: use of this directive impacts CPU average load!
        # Uncomment this if you like to see progress and transfer rate with ftpwho
        # in downloads. That is not needed for uploads rates.
        # UseSendFile off

        TransferLog /var/log/proftpd/xferlog
        SystemLog /var/log/proftpd/proftpd.log

        # Logging onto /var/log/lastlog is enabled but set to off by default
        #UseLastlog on

        # In order to keep log file dates consistent after chroot, use timezone info
        # from /etc/localtime. If this is not set, and proftpd is configured to
        # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
        # savings timezone regardless of whether DST is in effect.
        #SetEnv TZ :/etc/localtime

        <IfModule mod_quotatab.c>
        QuotaEngine off
        </IfModule>

        <IfModule mod_ratio.c>
        Ratios off
        </IfModule>

        # Delay engine reduces impact of the so-called Timing Attack described in
        # http://www.securityfocus.com/bid/11430/discuss
        # It is on by default.
        <IfModule mod_delay.c>
        DelayEngine on
        </IfModule>

        <IfModule mod_ctrls.c>
        ControlsEngine off
        ControlsMaxClients 2
        ControlsLog /var/log/proftpd/controls.log
        ControlsInterval 5
        ControlsSocket /var/run/proftpd/proftpd.sock
        </IfModule>

        <IfModule mod_ctrls_admin.c>
        AdminControlsEngine off
        </IfModule>

        #
        # Alternative authentication frameworks
        #
        #Include /etc/proftpd/ldap.conf
        #Include /etc/proftpd/sql.conf

        #
        # This is used for FTPS connections
        #
        #Include /etc/proftpd/tls.conf

        #
        # Useful to keep VirtualHost/VirtualRoot directives separated
        #
        #Include /etc/proftpd/virtuals.con

        # A basic anonymous configuration, no upload directories.
        # <Anonymous ~ftp>
        #   User ftp
        #   Group nogroup
        #   # We want clients to be able to login with "anonymous" as well as "ftp"
        #   UserAlias anonymous ftp
        #   # Cosmetic changes, all files belongs to ftp user
        #   DirFakeUser on ftp
        #   DirFakeGroup on ftp
        #
        #   RequireValidShell off
        #
        #   # Limit the maximum number of anonymous logins
        #   MaxClients 10
        #
        #   # We want 'welcome.msg' displayed at login, and '.message' displayed
        #   # in each newly chdired directory.
        #   DisplayLogin welcome.msg
        #   DisplayChdir .message
        #
        #   # Limit WRITE everywhere in the anonymous chroot
        #   <Directory *>
        #     <Limit WRITE>
        #       DenyAll
        #     </Limit>
        #   </Directory>
        #
        #   # Uncomment this if you're brave.
        #   # <Directory incoming>
        #   #   # Umask 022 is a good standard umask to prevent new files and dirs
        #   #   # (second parm) from being group and world writable.
        #   #   Umask 022 022
        #   #   <Limit READ WRITE>
        #   #     DenyAll
        #   #   </Limit>
        #   #   <Limit STOR>
        #   #     AllowAll
        #   #   </Limit>
        #   # </Directory>
        #
        # </Anonymous>

        # Include other custom configuration files
        Include /etc/proftpd/conf.d/

        UseReverseDNS off

        <Global>
        RootLogin on
        UseFtpUsers on
        ServerIdent on
        DefaultChdir /var/www
        DeleteAbortedStores on
        LoginPasswordPrompt on
        AccessGrantMsg "You have been authenticated successfully."
        </Global>

    Any idea what could be wrong? Thanks for your help!
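    One observation that may narrow this down (an assumption to verify, not a confirmed fix): CWD and PWD succeed while only MLSD returns 550, and in 1.3.4 MLSD is handled by mod_facts, which applies its own checks rather than the plain LIST path; there are reports of ListOptions interfering with MLSD on this release. A minimal experiment:

        # comment out the list options, reload proftpd, then retry the client
        # ListOptions "-l"

    Also worth a try: configuring the FTP client to use plain LIST instead of MLSD, which isolates whether MLSD handling (rather than filesystem permissions) is the problem.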

    Read the article

  • What can go wrong with a GLIBC upgrade?

    - by Sevenless
    I recently installed a piece of software that my group needs for a research project starting next September. Turns out the software has a known crash bug when used with glibc 2.12.1. My boss asked if we can upgrade glibc on the server that's supposed to run it. Cue my skeptical silence... At some point, I got it into my brain that messing with glibc was about as good an idea as messing with a hungry puma; however, I've been unable to determine the source of this belief. So, if I go ahead with this:

    1. Am I doing something flagrantly stupid (e.g. I won't fix my problem, I will brick my server, or I will initiate a zombie apocalypse)?
    2. What can go wrong?
    3. What is likely to go wrong?
    4. How do I avoid the answers to 2 and 3?

    Read the article

  • Website memory problem

    - by Toktik
    I have CentOS 5 installed on my server (a VPS). I have a site with a constant ~150 users online. At first glance the site looks OK, but when I click through links I sometimes receive an out-of-memory PHP error. It looks like this:

        Fatal error: Out of memory (allocated 36962304) (tried to allocate 7680 bytes) in /home/host/public_html/sites/all/modules/cck/modules/fieldgroup/fieldgroup.install on line 100

    The amount that fails to allocate is always very small. On average I have 30% CPU load and 25% RAM load, so I don't think this is a physical memory problem. My PHP memory limit is set to 1500MB. My Apache error log looks like this:

        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402
        [Thu Sep 30 17:49:00 2010] [error] [client 91.204.190.5] File does not exist: /home/host/public_html/favicon.ico

    I have not run into this on my server before; the problem appeared by itself. Besides this, I'm receiving some server error mails:

        cpsrvd failed @ Fri Sep 24 16:45:20 2010. A restart was attempted automagically.
        Service Check Method: [tcp connect]
        Failure Reason: Unable to connect to port 2086

    The same goes for tailwatchd. Support tried, and can't help me...
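    Given that the failure happens at around 36 MB allocated while the PHP limit is 1500 MB, the ceiling is probably imposed from outside PHP. If this is an OpenVZ/Virtuozzo VPS (an assumption — many CentOS 5 VPSes of that era were), the beancounters will say so:

        cat /proc/user_beancounters
        # a non-zero 'failcnt' on privvmpages/vmguarpages means the container,
        # not PHP or Apache, is refusing the allocations

    A non-zero failcnt there would also explain cpsrvd and tailwatchd dying: every process on the VPS competes for the same container-wide memory allowance.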

    Read the article

  • When I restart my LXC environment, the container does not re-bind to the IP address

    - by RoboTamer
    The IP no longer responds to a remote ping. With restart I mean:

        lxc-stop -n vm3
        lxc-start -n vm3 -f /etc/lxc/vm3.conf -d

    -- /etc/network/interfaces

        auto lo
        iface lo inet loopback
            up route add -net 127.0.0.0 netmask 255.0.0.0 dev lo
            down route add -net 127.0.0.0 netmask 255.0.0.0 dev lo

        # device: eth0
        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.22.189.58
            netmask 255.255.255.248
            gateway 192.22.189.57
            broadcast 192.22.189.63
            bridge_ports eth0
            bridge_fd 0
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off
            post-up ip route add 192.22.189.59 dev br0
            post-up ip route add 192.22.189.60 dev br0
            post-up ip route add 192.22.189.61 dev br0
            post-up ip route add 192.22.189.62 dev br0

    -- /etc/lxc/vm3.conf

        lxc.utsname = vm3
        lxc.rootfs = /var/lib/lxc/vm3/rootfs
        lxc.tty = 4
        #lxc.pts = 1024                         # pseudo tty instances for strict isolation
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0
        lxc.network.name = eth0
        lxc.network.mtu = 1500
        #lxc.cgroup.cpuset.cpus = 0             # security parameter
        lxc.cgroup.devices.deny = a             # deny all access to devices
        lxc.cgroup.devices.allow = c 1:3 rwm    # dev/null
        lxc.cgroup.devices.allow = c 1:5 rwm    # dev/zero
        lxc.cgroup.devices.allow = c 5:1 rwm    # dev/console
        lxc.cgroup.devices.allow = c 5:0 rwm    # dev/tty
        lxc.cgroup.devices.allow = c 4:0 rwm    # dev/tty0
        lxc.cgroup.devices.allow = c 4:1 rwm    # dev/tty1
        lxc.cgroup.devices.allow = c 4:2 rwm    # dev/tty2
        lxc.cgroup.devices.allow = c 1:9 rwm    # dev/urandom
        lxc.cgroup.devices.allow = c 1:8 rwm    # dev/random
        lxc.cgroup.devices.allow = c 136:* rwm  # dev/pts/*
        lxc.cgroup.devices.allow = c 5:2 rwm    # dev/pts/ptmx
        lxc.cgroup.devices.allow = c 254:0 rwm  # rtc

        # mount points
        lxc.mount.entry=proc /var/lib/lxc/vm3/rootfs/proc proc nodev,noexec,nosuid 0 0
        lxc.mount.entry=devpts /var/lib/lxc/vm3/rootfs/dev/pts devpts defaults 0 0
        lxc.mount.entry=sysfs /var/lib/lxc/vm3/rootfs/sys sysfs defaults 0 0

    Read the article

  • How to make new file permission inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data. A script runs under the user id 'robot'; it writes to the data directory and updates files inside. The idea is that data is open for both me and robot to update. So I set up the permissions and owner group like this:

        drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data

    where both me and robot belong to 'robot-grp'. I changed the permission and owner group recursively, matching the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, new files uploaded do not inherit the parent directory's permissions as I hoped. Instead they look like this:

        -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt

    When robot tries to update new-file.txt, it fails due to lack of file permission. I'm not sure if setting umask helps; in any case the new files do not really follow it:

        $ umask -S
        u=rwx,g=rx,o=rx

    I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
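    Two mechanisms together usually solve exactly this (standard tools; the rsync mode string below is one sketch among several): the setgid bit makes new entries in data inherit robot-grp instead of the uploader's primary group, and rsync's --chmod overrides the source permissions that rsync -a would otherwise faithfully copy:

        chmod g+s data                               # new entries inherit the directory's group
        rsync -av --chmod=Dg+rwxs,Fg+rw src/ data/   # force group-writable on upload

    This also explains why the umask didn't help: rsync -a replicates the permissions of the source files, bypassing the umask entirely.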

    Read the article

  • hosts.deny not blocking ip addresses

    - by Jamie
    I have the following in my /etc/hosts.deny file:

        #
        # hosts.deny    This file describes the names of the hosts which are
        #               *not* allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #
        # The portmap line is redundant, but it is left to remind you that
        # the new secure portmap uses hosts.deny and hosts.allow. In particular
        # you should know that NFS uses portmap!

        ALL:ALL

    and this in /etc/hosts.allow:

        #
        # hosts.allow   This file describes the names of the hosts which are
        #               allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #

        ALL: xx.xx.xx.xx , xx.xx.xxx.xx , xx.xx.xxx.xxx , xx.x.xxx.xxx , xx.xxx.xxx.xxx

    but I am still getting lots of these emails:

        Time: Thu Feb 10 13:39:55 2011 +0000
        IP: 202.119.208.220 (CN/China/-)
        Failures: 5 (sshd)
        Interval: 300 seconds
        Blocked: Permanent Block

        Log entries:
        Feb 10 13:39:52 ds-103 sshd[12566]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12568]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12571]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:53 ds-103 sshd[12575]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root

    What's worse is that CSF tries to auto-block these IPs when they attempt to get in, and although it does put the IPs in the csf.deny file, they do not get blocked either. So I am trying to block all IPs with /etc/hosts.deny and allow only the IPs I use with /etc/hosts.allow, but so far it doesn't seem to work. Right now I'm having to block each one manually with iptables; I would rather it blocked the attackers automatically, in case I am away from a PC or asleep.
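    A quick sanity check worth running (an assumption about the cause, but cheap to test): hosts.allow/hosts.deny only affect daemons that are started via tcpd or compiled against libwrap, so if this sshd build ignores TCP wrappers, the files will appear to do nothing for SSH:

        ldd /usr/sbin/sshd | grep libwrap   # no output = sshd ignores hosts.deny

    If libwrap is there, a stricter test than ALL:ALL is an explicit sshd: ALL line in hosts.deny. And since the csf.deny entries are not taking effect either, the iptables chains csf manages may not be loading at all; csf -r reloads them, and iptables -L -n shows whether the deny rules are actually present.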

    Read the article
