Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.

  • winbind not working

    - by Yon
    I'm trying to set up winbind with an Active Directory running on Win2003. This works: net rpc user -S SOMEDOMAIN -U Administrator Password: Administrator ASPNET Demo Guest IUSR_SERVER20 IWAM_SERVER20 krbtgt RemoteUser SUPPORT_388945a0 This does not: wbinfo -u Error looking up domain users From the winbindd log: [2012/05/31 16:45:38, 1] nsswitch/winbindd_ads.c:ads_cached_connection(128) ads_connect for domain SOMEDOMAIN failed: Operations error [2012/05/31 16:46:38, 1] nsswitch/winbindd_util.c:trustdom_recv(230) Could not receive trustdoms ADS is not working with this domain. Why is winbind trying to use it instead of RPC? How can I force it to use only RPC and for all of this to work?
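
    One hedged possibility, not from the original thread: if ADS (Kerberos/LDAP) connectivity to this domain cannot be fixed, winbind can run in classic NT4/RPC mode by joining with security = domain. A minimal smb.conf sketch (the workgroup name is taken from the question, the idmap ranges are placeholders):

        [global]
            workgroup = SOMEDOMAIN
            security = domain            # RPC-style domain membership instead of ADS
            winbind enum users = yes
            winbind enum groups = yes
            idmap uid = 10000-20000
            idmap gid = 10000-20000

    After editing, re-join with net rpc join -S SOMEDOMAIN -U Administrator and restart winbindd before retrying wbinfo -u.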

    Read the article

  • Changed array composition, mdadm --detail still shows the old array size

    - by Prody
    I have a machine with 8 disks. I installed it with my hoster's install automation (it's OVH, so I don't have physical access to it). The machine installed correctly, but it made an array I wanted to change: it created a raid5 array across 5 of the 8 disks, and I've changed it to raid10 across all 8. I did this by first --stopping the old array and then --creating the new one. It warned me that a previous array was there, but I chose to continue. So it created the array, spent about 10 hours syncing it, and now that it's ready I get this strange behavior: when I run fdisk on it and print the partition table (p), I see the correct size. But when I run mdadm --detail on it, I see the old array's size even though it reports the new composition and level. When I try to pvcreate on it, I get the old size again for some reason. Did I have to do something else? Did I miss something?
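
    Not from the original thread, just a hedged checklist (device names such as /dev/md0 and /dev/sd[a-h]1 are placeholders):

        cat /proc/mdstat                       # confirm which md device is the new raid10
        mdadm --examine /dev/sda1              # look for leftover raid5 superblocks on each member
        mdadm --stop /dev/md0                  # only if recreating the array is acceptable
        mdadm --zero-superblock /dev/sd[a-h]1  # wipe stale metadata before running --create again

    Superblocks left over from the previous raid5 can coexist with the new metadata and confuse tools that scan the member disks, so wiping them before re-creating is the usual precaution.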

    Read the article

  • Apache conf for high-traffic CMS with backend users?

    - by Annan
    I'm in the situation where a website is going to have a high number of web users and a few backend webmasters. Webmasters will upload images (+other high mem tasks) and this bumps up the memory allocation of the httpd child processes to 100-150mb. In order to stop swapping I'm currently setting MaxClients in httpd.conf to 20. However this lowers maximum simultaneous requests. Will this be a problem when the website goes live? What is the best configuration? Info: Drupal 6, PHP 5, Apache 2.2 (Prefork atm) I'm thinking about Worker MPM, two apache instances or low MaxRequestsPerChild.
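
    For illustration only, a worker MPM sizing sketch; the numbers are placeholders to be tuned against available RAM, and mod_php is normally moved to FastCGI before switching to a threaded MPM:

        <IfModule mpm_worker_module>
            ServerLimit          16
            StartServers          4
            ThreadsPerChild      25
            MaxClients          400     # ServerLimit * ThreadsPerChild
            MinSpareThreads      25
            MaxSpareThreads      75
            MaxRequestsPerChild 500     # recycle children so upload-bloated processes get reclaimed
        </IfModule>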

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup in which one of the PVs is a USB disk (I know). That disk is getting the error: Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3 Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4 This is causing problems with all of the LVs on it. pvs shows the PV as an unknown device. I can ls the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables between the USB drive and the machine are secure. What should I do to get this back up and running in the meantime? Should I unmount each LV and run fsck.ext4 on each one, like fsck.ext4 -y /dev/vg1/lv_logvolname?
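
    A hedged sequence using the names from the question (repeat the unmount/fsck for every ext4 LV on that PV):

        umount /dev/vg1/lv_logvolname          # unmount every LV that sits on the failing PV
        vgchange -an vg1                       # deactivate the VG before touching the disk
        # reseat or recable the USB disk, then:
        pvscan                                 # the PV often reappears under a new /dev/sdX node
        vgchange -ay vg1
        fsck.ext4 -f /dev/vg1/lv_logvolname

    "unknown device" in pvs usually means the USB disk re-enumerated under a different device node, which pvscan picks up again once the disk is reachable.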

    Read the article

  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL6. I have root access to both. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's UID on client is 502. appuser's UID on server is 506. Both users need read and write capability on the NFS share. To facilitate this, I made the share owned by appuser on server. Running id appuser on each yields: uid=506(appuser). Of course, client does not recognize that ownership, since appuser has a different id on client. So I did the following: Changed UID of user in /etc/passwd on client to be 506. Changed ownership of appuser's $HOME on client to be appuser again so that I could log in. Now, when I go to look at the NFS share from the client side, I see that it is owned by 502. 502 is the OLD id for appuser on client. I can't change ownership of the NFS share from client, since that is a volume that physically resides on server. I need to make sure that the NFS share shows ownership of appuser from both server and client. What step have I missed since changing the appuser id on client? NOTE: I have not rebooted client (or anything else.)
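
    A hedged explanation, not from the original thread: NFSv3 carries only numeric UIDs, so files that appuser wrote from the client before the change were stored on the server with owner UID 502, and they keep showing 502 no matter what the accounts say now. The fix would be applied on the server, followed by a remount on the client to drop cached attributes (the paths are placeholders):

        # on the server, where the filesystem actually lives
        find /export/appshare -uid 502 -exec chown appuser {} +

        # on the client, refresh the mount so stale attributes are not shown
        umount /mnt/appshare && mount /mnt/appshare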

    Read the article

  • Time Zone on WebLogic Server

    - by adejuanc
    To configure the time zone for WebLogic Server, use the following JVM startup argument: -Duser.timezone=<timezone> For example, in the Java arguments in the admin console at Environments -> Servers -> Servername -> Server Start tab, configure the startup settings that Node Manager will use to start the particular server, e.g. -Duser.timezone='America/Phoenix'. There are many different time zones, each with its own ID; for a complete list please refer to http://en.wikipedia.org/wiki/List_of_zoneinfo_time_zones For testing, you can run the following code on WLS from a JSP, a servlet, or by deploying the class:

        import java.util.Calendar;
        import java.util.TimeZone;

        public class TestTimeZone {
            public static void main(String[] args) {
                Calendar calendar = Calendar.getInstance();
                TimeZone timeZone = calendar.getTimeZone();
                System.out.println(" your current TimeZone is : " + timeZone.getDisplayName());
                System.out.println(" Time Zone id : " + timeZone.getID());
            }
        }
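
    For servers started with the domain scripts rather than Node Manager, the same flag is commonly appended to JAVA_OPTIONS; a hedged sketch for $DOMAIN_HOME/bin/setDomainEnv.sh (the timezone ID is just an example):

        JAVA_OPTIONS="${JAVA_OPTIONS} -Duser.timezone=America/Phoenix"
        export JAVA_OPTIONS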

    Read the article

  • Issue changing mysql password on Debian

    - by Sean
    I installed mysql on my Debian server. I couldn't get into the database because it kept saying I put in the wrong password, so I looked on the internet and found that I could log into mysql using the command sudo mysql --defaults-file=/etc/mysql/debian.cnf From there I typed use mysql; and then mysql> UPDATE user SET password=PASSWORD('password') WHERE user='root'; I know this changed the password because I ran select Host, User, Password from user; and the encrypted values had changed for all three of the root user rows. But I am still not able to log in to mysql using mysql -u root -p
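
    A hedged suggestion, not from the original thread: updating the mysql.user table directly does not take effect until the running server reloads its grant tables, which is an easy step to miss:

        UPDATE user SET Password=PASSWORD('newpassword') WHERE User='root';
        FLUSH PRIVILEGES;   -- reload the grant tables so the change applies without a restart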

    Read the article

  • Why doesn't this cron work?

    - by Alex
    I do "crontab -e" and add the following line: 0 9 * * * /usr/bin/python /home/g1/g1/utils/statsEmail.py > /home/g1/log/statsemail.log But it doesn't work! Why? The script itself works. Also, the log is empty. My other command in crontab is this, and it works: 0 9 * * * /usr/bin/python /home/g1/g1/sphinx/updateall.py > /home/g1/log/updateall.log

    Read the article

  • Unable to use cloned VM, OpenSUSE, VirtualBox

    - by Kremchik
    I've cloned a VM and now while booting it I see a message: Trying manual resume from /dev/sda1 Invoking userspace resume from /dev/sda1 resume: libgcrypt version: 1.5.0 Trying manual resume from /dev/sda1 invoking in-kernel resume from /dev/sda1 Waiting for device /dev/disk/by-id/ata-VBOX_HARDDISK_.....-part2 to appear: ... Could not find /dev/disk/...-part2 Want me to fall back to /dev/disk/...-part2 (Y/n) If I press 'Y' it tries to boot again, fails, then exits to /bin/sh. If I press 'n' it exits to /bin/sh immediately. I've read a solution here: http://diggerpage.blogspot.com/2011/11/cannot-boot-opensuse-12-after-cloning.html but I don't understand how to access the files on the disk so I can edit /etc/fstab and /boot/grub/menu.lst.
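
    Not from the linked post, just a hedged way to reach those files when the system drops to /bin/sh: boot the VM from an openSUSE install/rescue ISO, mount the root partition and edit the two files there. The device names below are assumptions based on the boot message:

        mount /dev/sda2 /mnt            # the root partition; adjust if yours differs
        vi /mnt/etc/fstab               # replace the /dev/disk/by-id/... names with plain /dev/sdaN entries
        vi /mnt/boot/grub/menu.lst      # fix the root= and resume= kernel parameters the same way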

    Read the article

  • Creating software raid on spare internal drives with Fedora

    - by Wizzard
    Hi there, I have two internal 80GB drives which are blank and just sitting in the case. I have tried googling for the steps, but I can only find out how to set up RAID while first installing Fedora, not how to do it on a system that is already installed. The drives are blank and the system is not on them, so it should really be as simple as formatting them and binding them into a RAID array - but I can't find any information. Any clues?
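
    A hedged sketch of doing it on a running system with mdadm (the device names /dev/sdb and /dev/sdc and the mount point are placeholders - double-check which nodes the spare disks actually are before running anything destructive):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # mirror the two spare disks
        mkfs.ext4 /dev/md0
        mdadm --detail --scan >> /etc/mdadm.conf                               # so the array assembles at boot
        mkdir -p /srv/raid
        echo '/dev/md0  /srv/raid  ext4  defaults  0 2' >> /etc/fstab
        mount /srv/raid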

    Read the article

  • sed syntax to remove xml

    - by mjb
    I'm trying to strip the markup from this output so I can plug it into GreekTools, but I am getting stuck on sed. curl --silent www.brainyquote.com | egrep '(span class="body")|(span class="bodybold")' | sed -n '6p; 7p; ' | sed 's/\<*\>//g' [ex] <span class="body">Literature is news that stays news.</span><br> <span class="bodybold">Ezra Pound</span> Could someone help me along on this track?
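
    A hedged note on the final sed: in GNU sed, \< and \> are word-boundary escapes rather than literal angle brackets, so that expression never matches the tags. A tag-stripping pattern for the same pipeline could look like:

        curl --silent www.brainyquote.com \
          | egrep '(span class="body")|(span class="bodybold")' \
          | sed -n '6p; 7p;' \
          | sed 's/<[^>]*>//g'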

    Read the article

  • How can I measure actual memory usage from my running processes?

    - by NullUser
    I have two servers, server1 and server2. Both of them are identical HP blades, running the exact same OS (RHEL 5.5). Here's the output of free for both of them:

        ### server1:
                     total       used       free     shared    buffers     cached
        Mem:       8017848    2746596    5271252          0     212772    1768800
        -/+ buffers/cache:     765024    7252824
        Swap:     14188536          0   14188536

        ### server2:
                     total       used       free     shared    buffers     cached
        Mem:       8017848    4494836    3523012          0     212724    3136568
        -/+ buffers/cache:    1145544    6872304
        Swap:     14188536          0   14188536

    If I understand correctly, server2 is using significantly more memory for disk I/O caching, which still counts as memory used. But both are running the same OS and, if I remember correctly, I configured both with the same parameters when they were installed. I did a diff on /etc/sysctl.conf and they are identical. The problem is, I am collecting memory usage and other metrics over a period of time (e.g. vmstat, iostat, etc.) while a load is generated on the system. The memory used for caching is throwing off my calculations on the results. How can I measure actual memory usage from my running processes, rather than system usage? Is used - (buffers + cached) a valid way to measure this?
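
    A hedged confirmation: used - (buffers + cached) is exactly what the kernel reports on the -/+ buffers/cache line, so it is a reasonable proxy for memory actually held by processes (it still ignores things like tmpfs and shared memory). The same figure can be pulled straight from the Mem: line, for example:

        free -k | awk '/^Mem:/ {print "app memory (kB):", $3 - $6 - $7}'
        # server1: 2746596 - 212772 - 1768800 = 765024, matching the -/+ buffers/cache column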

    Read the article

  • Is it inefficient to have symbolic links to symbolic links?

    - by Ogre Psalm33
    We're setting up a series of Makefiles where we want to have a project-level include directory that will have symbolic links to sub-project-level include files. Many sub-project developers have chosen to have their include files also be symbolic links to yet another directory where the actual software is located. So my question is, is it inefficient to have a symbolic link to a symbolic link to another file (for, say, a C++ header that may be included dozens or more times during a compile)? Example directory tree:

        /project/include/
            x_header1.h -> /project/src/csci_x/include/header1.h
            x_header2.h -> /project/src/csci_x/include/header2.h
        /project/src/csci_x/
            include/
                header1.h -> /project/src/csci_x/local_1/cxx/header1.h
                header2.h -> /project/src/csci_x/local_2/cxx/header2.h
            local_1/cxx/
                module1.cpp
                header1.h
            local_2/cxx/
                module2.cpp
                header2.h
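
    For illustration, namei walks a path and prints every symlink it crosses, which makes the depth of the chain easy to see; each extra hop costs only one more link lookup, which in practice is negligible next to opening and parsing the header itself:

        namei -l /project/include/x_header1.h   # lists each symlink in the chain and what it resolves to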

    Read the article

  • How do I map a network drive in Ubuntu? I want to save my Firefox downloads directly in the mapped network drive

    - by NJTechie
    I work in an environment where files are exchanged over email and then processed into databases. In Windows, mapping a network drive and saving files from Firefox/Chrome downloads directly to a folder on that drive is a breeze. How do I achieve the same in Ubuntu? I don't see the SFTP'ed drive/directory as an option in Firefox's Downloads settings. Thanks in advance!
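
    One hedged option (host, user and paths are placeholders): mount the remote directory with sshfs so it shows up as an ordinary local folder that Firefox can save into:

        sudo apt-get install sshfs
        mkdir -p ~/netdrive
        sshfs user@fileserver:/exchange ~/netdrive
        # point Firefox's download folder at ~/netdrive; unmount later with: fusermount -u ~/netdrive

    Browsing the share through Nautilus (Places -> Connect to Server) also works, but an sshfs mount appears as a plain directory to every application, which is what the download dialog needs.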

    Read the article

  • Is there a DBus command to set the position of the KDE panel?

    - by Liss
    I have a single vertical KDE panel. When I switch from a single-monitor X screen back to a triple-monitor X screen (with xrandr), my KDE panel ends up on the right edge of the middle monitor instead of the right edge of the right monitor. Many windows are also in the wrong place after the switch, but I have a script which restores the geometry of all windows (as it was in the triple-monitor state before I switched to single-monitor mode), so that is not a problem. Unfortunately, it does not work for the KDE panel - it stays in the wrong place. I guess I need to use DBus to restore the KDE panel position (programmatically move it to the right edge of the right screen). I tried googling, but it seems this is not very well documented. Is there a DBus command to set the position of the KDE panel?

    Read the article

  • Changes in the Maven Embedded GlassFish plugin

    - by Romain Grecourt
    The plugin changed its Maven coordinates (a.k.a. GAV) over time: versions <= 3.1.1 are available under org.glassfish:maven-glassfish-embedded-plugin, and versions >= 3.1.2 under org.glassfish.embedded:maven-glassfish-embedded-plugin. The goal "glassfish-embedded:run" has changed the way it reads the deployment configuration in the latest version, 4.0. Projects using previous versions of the plugin will stop working with this goal. Here is an example of the old behavior:

        <plugin>
            <groupId>org.glassfish.embedded</groupId>
            <artifactId>maven-embedded-glassfish-plugin</artifactId>
            <version>3.1.2.2</version>
            <configuration>
                <app>target/${project.build.finalName}.war</app>
                <contextRoot>/</contextRoot>
                <goalPrefix>embedded-glassfish</goalPrefix>
                <autoDelete>true</autoDelete>
                <port>8080</port>
            </configuration>
        </plugin>

    The new behavior is as follows:

        <plugin>
            <groupId>org.glassfish.embedded</groupId>
            <artifactId>maven-embedded-glassfish-plugin</artifactId>
            <version>4.0</version>
            <configuration>
                <goalPrefix>embedded-glassfish</goalPrefix>
                <autoDelete>true</autoDelete>
                <port>8080</port>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>deploy</goal>
                    </goals>
                    <configuration>
                        <app>target/${project.build.finalName}.war</app>
                        <contextRoot>/</contextRoot>
                    </configuration>
                </execution>
            </executions>
        </plugin>

    The new version looks for an execution of the deploy goal, and its associated configuration, when running the run goal. Both versions let you run the latest glassfish-embedded jar; you only need to add it as a plugin dependency:

        <plugin>
            [...]
            <dependencies>
                <dependency>
                    <groupId>org.glassfish.main.extras</groupId>
                    <artifactId>glassfish-embedded-all</artifactId>
                    <version>4.0</version>
                </dependency>
            </dependencies>
        </plugin>

    Read the article

  • Cannot open ports in iptables on CentOS 5?

    - by abszero
    I am trying to open up ports in CentOS's firewall and am having a terrible go at it. I have followed the "HowTo" here: http://wiki.centos.org/HowTos/Network/IPTables as well as a few other places on the Net but I still can't get the bloody thing to work. Basically I wanted to get two things working: VNC and Apache over the internal network. The problem is that the firewall is blocking all attempts to connect to these services. Now if I issue service iptables stop and then try to access the server via VNC or hit the webserver everything works as expected. However the moment I turn iptables back on all of my access is blocked. Below is a truncated version of my iptables file as it appears in vi -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5801 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 6001 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT Really I would just be happy if I could get port 80 opened up for Apache since I can do most stuff via putty but if I could figure out VNC as well that would be cool. As far as VNC goes there is just a single/user desktop that I am trying to connect to via: [ipaddress]:1 Any help would be greatly appreciated!
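
    A hedged explanation of the usual cause on CentOS 5: the RH-Firewall-1-INPUT chain in /etc/sysconfig/iptables ends with a catch-all reject, and any ACCEPT lines added after it are never reached. Placing the new rules above that final line and reloading usually fixes it, e.g.:

        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5900:5901 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited

    followed by service iptables restart. Rule order is the first thing to check; the syntax of the quoted lines themselves is fine.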

    Read the article

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup: private network vboxnet1 10.0.7.0/24, one host (Ubuntu desktop) and one VM (Ubuntu server, VirtualBox). Addressing layout: host 10.0.7.1, VM 10.0.7.101, VM macvlan namespace 10.0.7.102. On the VM, I ran the following commands:

        ip netns add mac                              # create a new namespace
        ip link add link eth0 mac0 type macvlan       # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac up
        ip addr add 10.0.7.102/24 dev mac0

    So that we basically end up with (like Inception?):

        +----------------------------+
        | Host: 10.0.7.1             |
        |  +-----------------------+ |
        |  | VM: 10.0.7.101        | |
        |  |  +------------------+ | |
        |  |  | NS: 10.0.7.102   | | |
        |  |  +------------------+ | |
        |  +-----------------------+ |
        +----------------------------+

    What works: ping between host and VM, ping between NS and NS, dhclient from NS. What does not work: ping between NS and VM, ping between NS and host. Where I started to go nuts: tcpdump on the host (the real machine) actually shows ARP requests AND replies; tcpdump in the NS shows ARP requests sent to the host; tcpdump on the VM makes the whole mess work (!) -- pings start getting answers as soon as tcpdump is started on the VM?! So, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but I can't figure out what exactly... By the way, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
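
    A hedged reading of the symptoms, not from the original thread: the macvlan interface has its own MAC address, and if the VirtualBox adapter's Promiscuous Mode is left at "Deny" the VM never receives frames addressed to that extra MAC - which would also explain why running tcpdump on the VM (which flips eth0 into promiscuous mode) suddenly makes everything work. Setting Promiscuous Mode to "Allow All" on the vboxnet1 adapter is the first thing to try. Separately, a macvlan child cannot talk to its parent interface's own address by design; the usual workaround is to move the VM's address onto a macvlan sibling in bridge mode (addresses reused from the question):

        ip link add link eth0 macvlan0 type macvlan mode bridge   # sibling interface kept in the VM's root namespace
        ip addr del 10.0.7.101/24 dev eth0
        ip addr add 10.0.7.101/24 dev macvlan0
        ip link set macvlan0 up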

    Read the article

  • Unable to start Apache after changes to rc.conf and resolv.conf

    - by shupru
    I had a working configuration this morning with the following simple /etc/rc.conf:

        ifconfig_rl0="DHCP"
        ifconfig_xl="inet 192.168.1.11 netmask 255.255.255."
        defaultrouter="192.168.1.1"

    I added the following lines:

        firewall_enable="YES"
        firewall_type="SIMPLE"
        firewall_logging="YES"
        sshd_enable="YES"
        apache_enable="YES"
        mysql_enable="YES"

    My httpd.conf includes:

        NameVirtualHost 192.168.1.11
        <VirtualHost 192.168.1.11>
        ...
        </VirtualHost>

    Now Apache and the SSH server are down. I changed rc.conf back to the last working configuration and still have no SSH or Apache:

        apachectl start   #--> /usr/local/sbin/apachectl start: httpd could not be started
        apachectl status  #--> Looking up localhost
                          #    Making http connection to localhost
                          #    Alert!: Unable to connect to remote host.
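
    A hedged pointer, not from the original thread: with firewall_enable="YES" and firewall_type="SIMPLE", /etc/rc.firewall loads a rule set that blocks most inbound traffic (including SSH and HTTP) until its SIMPLE section is edited for your network, and reverting rc.conf does not unload rules already in the kernel. From the console, something like:

        ipfw list                              # see which rule is dropping traffic to ports 22/80
        ipfw add 100 allow ip from any to any  # temporary pass-all while troubleshooting (console only)
        apachectl configtest                   # check that the NameVirtualHost/VirtualHost block parses
        tail /var/log/httpd-error.log          # the reason httpd "could not be started" usually lands here

    Setting firewall_type="open" (or removing the firewall lines) in rc.conf and rebooting would confirm whether the firewall is the only problem.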

    Read the article

  • Web Server Users - Best Practice

    - by Toby
    I was wondering what is considered best practice when several developers/administrators require access to the same web server. Should there be one non-root user, with a secure username and password unique to the web server, which everyone logs in as, or should there be a username for each person? I am leaning towards a username for each person to aid in logging etc., but then does the same user keep the same credentials across several servers, or should at least their password change depending on the server they are on? Should any non-root user of the system be added to the sudoers file, or is it best practice to leave everyone off it and only let root perform certain tasks? Any help would be greatly appreciated.
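
    Not an authoritative answer, just a hedged illustration of the per-person-account approach combined with a restricted sudoers entry (the group name and command list are placeholders; edit with visudo):

        # /etc/sudoers.d/webadmins
        Cmnd_Alias WEBCTL = /usr/sbin/apachectl, /sbin/service httpd *
        %webadmins ALL = (root) WEBCTL

    Each administrator keeps an individual login, so actions show up under their own name in the auth and sudo logs, and they gain root only for the specific commands the role needs.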

    Read the article

  • Preferred apache permissions for www files with several authors

    - by user1316464
    I can't for the life of me figure out how to design the permissions scheme for my apache files. My requirements seem pretty simple: Apache should have standard permissions of RX on directories and R on files; web authors should have RWX on directories and RW on files; I don't want to give any access to "other"; and new files/folders should inherit the proper permissions. Here are the schemes I've tried:

        570 for directories and 460 for files, owner: Apache, group: Webdev
            The problem here is that new files created by users in the Webdev group are owned by user:Webdev and Apache can't read them. If Apache were in the Webdev group it would also have the wrong permissions (i.e. it would have write permission on files).

        750 for directories and 640 for files, owner: Webdev, group: Apache (Webdev is a member of Apache)
            The problem here is that there is only one webdev account and I have multiple people who need access to contribute. In theory this would work with only one developer if Webdev were also a member of the Apache group.

    Any ideas?
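
    A hedged pattern that covers all four requirements (the group name and path are placeholders, and the ACL part needs a filesystem mounted with ACL support): give the tree to a shared authors group with the setgid bit so new files inherit the group, and grant Apache read-only access through ACLs instead of through "other":

        chgrp -R webdev /var/www/site
        find /var/www/site -type d -exec chmod 2770 {} +   # rwx for authors, setgid so the group sticks
        find /var/www/site -type f -exec chmod 0660 {} +   # rw for authors, nothing for other
        setfacl -R -m g:apache:rX /var/www/site            # apache: read, plus execute on directories only
        setfacl -R -d -m g:apache:rX /var/www/site         # default ACL so new files get the same grant
        setfacl -R -d -m g:webdev:rwX /var/www/site        # new files stay group-writable for the authors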

    Read the article

  • Apache worker is crashing at 3,000 users

    - by user1618606
    I activated the Apache worker MPM on my VPS and I'm having problems, because the website crashes when around 3,000 users are accessing it. I'm using http://whos.amung.us/stats/2jzwlvbhvpft/ as a counter. My Apache worker configuration:

        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 1
        <IfModule mpm_worker_module>
            ServerLimit 20000
            StartServer 8000
            MinSpareThreads 10400
            MaxSpareThreads 14200
            ThreadLimit 5
            ThreadsPerChild 5
            MaxClients 20000
            MaxRequestsPerChild 0
        </IfModule>

    The VPS runs 64-bit Debian with LAMP, 14 GB of memory and 24 GHz of CPU. What can I do to get the best performance?
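
    A hedged observation, not from the original thread: with ThreadsPerChild 5, reaching MaxClients 20000 means 4,000 child processes, and values like StartServer 8000 / MinSpareThreads 10400 ask Apache to spawn thousands of processes up front, which by itself can exhaust 14 GB (note also that the directive is spelled StartServers). A more conventional worker sizing, with illustrative numbers to be tuned against the real per-process memory use, might look like:

        <IfModule mpm_worker_module>
            ServerLimit            64
            StartServers            4
            ThreadsPerChild        64
            MaxClients           4096    # ServerLimit * ThreadsPerChild
            MinSpareThreads        64
            MaxSpareThreads       256
            MaxRequestsPerChild 10000
        </IfModule>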

    Read the article
