Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.

Page 746/1523 | < Previous Page | 742 743 744 745 746 747 748 749 750 751 752 753  | Next Page >

  • How do you get autofs and updatedb to work together?

    - by Veek.M
    /etc/my.misc:
        sda1 -fstype=ntfs,user,exec :/dev/sda1
        sda3 -fstype=ntfs,user,exec :/dev/sda3
        sda4 -fstype=ntfs,user,exec :/dev/sda4
    /etc/auto.master:
        /my /etc/my.misc --ghost
    When I run locate .pdf, I get nothing, because although the mount points (sda1, sda2, ...) are created in /my, there is nothing in them until I access them. Unfortunately this is not good enough for updatedb, and it purges the /my/sdaX files from its cache. How do I prevent or solve this problem?
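
    One possible angle (an illustrative sketch, not part of the original question): the automounted directories have to be live while updatedb runs, and mlocate's updatedb normally prunes autofs filesystems, so a wrapper that triggers the mounts first can help.

        #!/bin/sh
        # run this from cron in place of plain updatedb; also check that
        # "autofs" (and the ntfs driver's fs type) is not listed in PRUNEFS
        # in /etc/updatedb.conf, or the freshly indexed entries get purged again
        for d in /my/sda1 /my/sda3 /my/sda4; do
            ls "$d" > /dev/null 2>&1    # accessing the directory triggers the automount
        done
        updatedb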

    Read the article

  • haproxy backend default location

    - by magd1
    If you go to www.company.com, I want it to redirect to /something/something on my server, but have the URL still show www.company.com. Is this possible in haproxy?
        backend new_marketing_server
            *** set default URL to /something/something ***
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000
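
    One way this is commonly handled (a hedged sketch against an older, reqrep-era HAProxy; newer releases would use http-request set-path instead, and this is not the poster's actual config) is to rewrite the request path inside the backend so the client-visible URL never changes:

        backend new_marketing_server
            mode http
            # rewrite "GET / HTTP/1.1" into "GET /something/something HTTP/1.1"
            # before it reaches the servers; the browser's address bar is untouched
            reqrep ^([^\ :]*)\ /\ (.*) \1\ /something/something\ \2
            # (balance/timeout/server lines as in the backend above)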

    Read the article

  • Permission denied when trying to execute a binary burned to a CD-R

    - by user16654
    On an Ubuntu 9.10 (Karmic Koala) machine, I burned a CD from the command prompt using: cdrecord -v speed=16 dev=0,1,0 /FPS.iso The CD now contains an executable and some files. I tested the CD by loading it onto another machine (Red Hat 5.3) and when I try to run the program I get the following message: bash: ./FPS1_1: Permission denied I can open other files like text documents (the executable also comes with shared libraries). I realized I had burned the CD as root so I burned another one as another user but I still have the same problem. How can I remove this permission or what is the problem? P.S. the image was in / if that helps
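
    One thing worth checking (general ISO9660 behaviour, not a diagnosis of this exact disc): a plain ISO9660 filesystem does not carry Unix execute bits, so either the image needs Rock Ridge extensions when it is mastered, or the binary has to be copied off the disc and made executable. An illustrative sketch:

        # remaster and burn with Rock Ridge so permissions survive (paths are placeholders)
        mkisofs -R -o FPS.iso /path/to/files
        cdrecord -v speed=16 dev=0,1,0 FPS.iso

        # or work around it on the reading machine:
        cp /media/cdrom/FPS1_1 /tmp/ && chmod +x /tmp/FPS1_1 && /tmp/FPS1_1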

    Read the article

  • Gentoo+urxvt+terminus: How do I change font version?

    - by gaidal
    In my Debian installation I can type extended ASCII characters such as åäö by default using the terminus font; in Gentoo, however, I can't get it to work so far. Nothing happens when I hit those keys, as in this thread: Missing glyphs in Terminus font, how to setup a fallback font? In this case, though, I know terminus supports those characters in at least some of its versions, since it works in Debian. So what I want is to find out how to see, and choose, which of the many different terminus font files is being used. I set the font in the same way on both Debian and Gentoo, using URxvt*font: xft:terminus:size=xx in .Xdefaults. Both systems use en_US.UTF-8 as the default locale.
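
    A couple of commands can show which Terminus file Xft actually resolves for that pattern (generic fontconfig usage, not Gentoo-specific; the size is just an example):

        fc-list | grep -i terminus                  # every installed Terminus file/variant
        fc-match -v "terminus:size=12" | grep -iE 'file|family|charset'   # the one Xft picks
        # compare the reported file/charset against the Debian box and install
        # or prefer the variant that actually contains the åäö glyphs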

    Read the article

  • SDcard /dev/sdb2 is apparently in use by the system; will not make a filesystem here

    - by user171223
    I divided my SD card into 2 partitions, but I got an error and couldn't create a new partition. Error: /dev/sdb2 is apparently in use by the system; will not make a filesystem here! My /dev/sdb was not mounted, and the output of lsblk was:
        cxphong@cxphong:~/Desktop$ lsblk
        NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sr0              11:0    1  1024M  0 rom
        sda               8:0    0 465.8G  0 disk
        +-sda1            8:1    0 118.8G  0 part
        +-sda2            8:2    0 147.7G  0 part /media/DATA
        +-sda3            8:3    0 137.1G  0 part
        +-sda4            8:4    0     1K  0 part
        +-sda5            8:5    0  1023M  0 part [SWAP]
        +-sda6            8:6    0  61.2G  0 part /
        sdb               8:16   1   3.7G  0 disk
        +-sdb1            8:17   1  70.6M  0 part
        +-sdb2            8:18   1   3.6G  0 part
        +-sdb1 (dm-0)   252:0    0  70.6M  0 part
        +-sdb2 (dm-1)   252:1    0   3.6G  0 part
    I couldn't delete /dev/sdb1 (dm-0) & /dev/sdb2 (dm-1). What are they?
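
    Those dm-0/dm-1 entries are device-mapper mappings sitting on top of sdb1/sdb2 (typically left behind by kpartx, multipath or a crypt/LVM tool); while they exist the kernel treats the partitions as in use. A possible cleanup, assuming they really are stale:

        sudo dmsetup ls                        # list the mappings and their names
        sudo kpartx -d /dev/sdb                # drop partition mappings kpartx created, if any
        sudo dmsetup remove <mapping-name>     # or remove a stale mapping by name
        sudo mkfs.ext4 /dev/sdb2               # retry (with whatever fs you want) once dm-0/dm-1 are gone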

    Read the article

  • Openmeetings: problem running "Address already in use"

    - by takpar
    Hi, I am trying to run OpenMeetings on my CentOS VPS. When I run $ ./red5.sh, after a lot of lines it says: Bootstrap Complete, and a few lines before that it says: Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind(Native Method) ... I have tried red5.sh as root and as a normal user; both give an error like that. Any suggestions?
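
    A generic first step (not specific to this VPS): find out what already holds the port red5 wants, which is 5080/1935 in a default OpenMeetings setup, then stop that process or change the port in red5's configuration.

        netstat -tlnp | grep -E ':5080|:1935'   # which PID owns the port?
        # or: lsof -i :5080
        # a stale java/red5 instance is the usual culprit; kill it, or change
        # the http/rtmp port in red5's conf before running ./red5.sh again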

    Read the article

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    For easily allowing or disallowing dynamic IP addresses you can add them as a hostname in a .htaccess file. As I have read in .htaccess allow from hostname?, Apache does a reverse lookup on the connecting IP address to see whether the response matches the allowed name. (Actually, Apache does a double lookup: first a reverse lookup, then a forward lookup on the result of the reverse.) This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: it "sounds" quite heavy: two extra lookups for every request. Is this indeed quite heavy, and would a reasonably busy server that would rather have less load than more get away with it? (E.g.: how does this load compare to the rest? If a request is 1000 times more expensive than the lookups it might be negligible; on the other hand, it could be the final straw.) Are there other solutions? I can of course write a script that does a lookup of the hostname and puts it in .htaccess files, but this feels a bit like a hack.
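
    A rough sketch of the "resolve once from cron" alternative mentioned above, so Apache only ever sees a literal IP (the hostname and path are placeholders, and the generated snippet would be the access-control part of the real .htaccess):

        #!/bin/sh
        # resolve the dynamic hostname once and write a static allow rule;
        # run this from cron every few minutes
        HOST=myhome.dyndns.example
        IP=$(dig +short "$HOST" | tail -n1)
        [ -n "$IP" ] && printf 'Order deny,allow\nDeny from all\nAllow from %s\n' "$IP" \
            > /var/www/site/.htaccess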

    Read the article

  • Binding MySQL to run from the public or private LAN IP address - which one is faster

    - by Lamin Barrow
    So we have 2 servers, both running at the same web host. We have bound MySQL to listen on the public IP address of the database server, and the web server connects to it via the public IP. Both servers are on the same private network. Currently, the DB connect call from our PHP script takes about 3 ms to connect to the MySQL database server host. My question is: would MySQL interaction from the web server be faster if we bound it to listen on the private LAN address of the database server instead of the public IP? Or is it the same regardless, making no difference?
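
    For reference, binding to the private interface is a one-line change (10.0.0.5 is a placeholder for the database server's LAN address); whether it is measurably faster depends on how the host routes public traffic between the two machines, but it does keep the MySQL listener off the public interface:

        # /etc/my.cnf (or /etc/mysql/my.cnf) on the database server -- illustrative
        [mysqld]
        bind-address = 10.0.0.5

        # and have the PHP connect call use the same private address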

    Read the article

  • rsnapshot stats

    - by Obscur Moirage
    I'd like to retrieve the following stats from rsnapshot: files synced, added files, modified files, deleted files. Is there a feature to retrieve these in rsnapshot, or is there another product that's able to do it? EDIT: As requested, I'll try to show that I'm not just asking without having done any research. I wasn't able to locate any rsnapshot feature doing this; maybe I'm searching in the wrong direction. So I've built a not very pretty script, called each time before rsnapshot is run. This Perl script stores each file's MD5 in order to compare the backup file structure between rsnapshot updates. I'm pretty sure it's worthless to show that code here. I think that keeping an eye on what changes on a server, for example, is a useful feature, so I'm asking. @pauska Most of the time I manage to find an answer myself; that's not the case here. Thanks
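
    One option worth trying before a custom checksum script (a sketch of rsnapshot.conf settings, not the poster's config; fields are tab-separated in the real file): let rsync itself report per-run changes and stats in rsnapshot's log.

        verbose           3
        loglevel          5
        logfile           /var/log/rsnapshot.log
        # pass itemize/stats through to rsync (appended to the usual defaults):
        rsync_long_args   --delete --numeric-ids --relative --delete-excluded --itemize-changes --stats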

    Read the article

  • Script to run chown on all folders and setting the owner as the folder name minus the trailing /

    - by Shikoki
    Some numpty ran chown -R username . in the /home folder on our web server, thinking he was in the desired folder. Needless to say, the server is throwing a lot of wobblies. We have over 200 websites and I don't want to chown them all individually, so I'm trying to make a script that will change the owner of each folder to the folder name, without the trailing /. This is all I have so far; once I can remove the / it will be fine, but I'd also like to check whether the name contains a "." and, if it doesn't, run the command, otherwise go on to the next one.
        #!/bin/bash
        for f in *
        do
            test=$f
            # manipulate the test variable
            chown -R $test $f
        done
    Any help would be great! Thanks in advance!
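
    A hedged completion of that loop, following the requirements as stated (strip the trailing slash, skip any folder name containing a dot):

        #!/bin/bash
        # run from /home: chown each directory to the user of the same name
        cd /home || exit 1
        for f in */; do
            name=${f%/}                  # folder name without the trailing /
            case $name in
                *.*) continue ;;         # contains a dot -> skip it
                *)   chown -R "$name" "/home/$name" ;;
            esac
        done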

    Read the article

  • can't run binaries or shell scripts

    - by hyperboreean
    I am running Debian testing and I am not able to run any binary or shell script; I keep getting "No such file or directory". The umask is the default one and I haven't fooled around with the paths. Also, I am aware of this question, but it doesn't work out for me - I compiled my code on this machine and I'm trying to run it on the same machine. All of my shell scripts have the correct shebang. Any advice?
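
    Generic diagnostics for "No such file or directory" on a binary that clearly exists (a common cause is a missing ELF interpreter, e.g. a 32-bit binary on a 64-bit system without the 32-bit loader installed); ./myprog is a placeholder:

        file ./myprog                             # ELF class, architecture, dynamic/static
        readelf -l ./myprog | grep interpreter    # which loader the binary asks for
        ls -l /lib/ld-linux.so.2 /lib64/ld-linux-x86-64.so.2   # is that loader present?
        ldd ./myprog                              # any missing shared libraries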

    Read the article

  • Hardware recommendations for building an Ubuntu encrypted file server

    - by Robert Mashlan
    I would like to build a file server for my home network using Ubuntu. It will serve files from RAID1 configured disks, either in the OS or in hardware. It will be connected to a Gigabit ethernet LAN. The disks will use an encrypted file system. It will serve samba shares. I would like a recommendation on what kind of processing power/memory I would need to build a box that would be able to sustain the full capacity of the Gigabit ethernet connection in a file transfer for a single connection with the overhead of serving from an encrypted disk. I'm not looking to build a dream server, I just want enough processing capacity for high performance (and reliable) file sharing and spend as little as possible for it. This may be tangential, but what kind of hardware would I need to have a server be able to reliably go into a low power mode when no requests are being made of it?
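
    As a rough way to size the CPU side (illustrative only): gigabit wire speed is roughly 110-120 MB/s, so the box's dm-crypt throughput needs to sit comfortably above that. Recent cryptsetup builds include a micro-benchmark for exactly this:

        cryptsetup benchmark    # prints per-cipher MiB/s for this CPU
        # a CPU with AES-NI typically reports several hundred MiB/s for aes-xts,
        # i.e. well beyond what a single gigabit link can carry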

    Read the article

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server with Ubuntu Server which is running pureFTP. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them to see if I have things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night - how often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW) but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks
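
    On the forced-fsck question specifically (generic ext3/ext4 behaviour; /dev/sda1 is a placeholder): the "not checked for x days" message comes from per-filesystem counters that tune2fs can inspect and adjust, not from a cron job.

        sudo tune2fs -l /dev/sda1 | grep -iE 'mount count|check'   # current thresholds
        sudo tune2fs -c 50 -i 180d /dev/sda1    # e.g. check every 50 mounts or 180 days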

    Read the article

  • Reading tip: the TDAzlan Exadata Value Guide

    - by the Alliances & Channels editorial team
    Oracle and TDAzlan have been successful partners for almost 20 years. As a VAD, TDAzlan supports Oracle partners in their business, for example through sales and marketing support, webcasts, certification training and enablement workshops, all the way to specialization. The Exadata Value Guide from Azlan has now been updated for partners and customers to cover the current 12c and X4 technologies. The brochure bundles facts and tips around Exadata and Engineered Systems. It breaks down technical details of Oracle software and hardware components and offers sales arguments as well as helpful explanations of the sales process. It also presents TDAzlan's Oracle Authorized Solution Center (OASC) in Munich, where partners can test complete Oracle solutions and present them to their customers. The Exadata Value Guide can also be downloaded from TDAzlan's Oracle information website.

    Read the article

  • What's wrong with this iptable rule?

    - by warl0ck
    I run dnsmasq locally as a caching server. In the old days I allowed all INPUT packets from lo+ and set the INPUT policy to DROP: -A INPUT -i lo+ -j ACCEPT Now I've decided to put this in the raw table to speed up rule matching: -A PREROUTING -i lo+ -j ACCEPT But that doesn't work as expected. Why? Since packets get processed by the raw table first, then nat, then filter, why doesn't that rule work the same as the old one?
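
    For what it's worth, this matches documented iptables behaviour rather than a quirk of this setup: an ACCEPT in the raw table only ends traversal of the raw table itself, and the packet still passes through nat and filter, so the filter INPUT chain with its DROP policy still needs its own rule. A sketch of the usual split:

        # the raw table is normally used to skip connection tracking, not to accept:
        iptables -t raw    -A PREROUTING -i lo+ -j NOTRACK
        # the accept still has to live in the filter table:
        iptables -t filter -A INPUT      -i lo+ -j ACCEPT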

    Read the article

  • Basic connectivity issues between Win 7 and XP mixed wired/wireless network. [Solved]

    - by Pulse
    Setup:
        Windows 7 x64 Ultimate desktop, hard wired to an Asus WL500gp router (WL500gpv2-1.9.2.7-d-r1445 firmware)
        Several bridged VirtualBox VMs running XP, 7, Ubuntu Server 10.04, Mint 9 and SuSE 11.2
        Win XP Pro SP3 notebook with a D-Link AirPlus wireless network card
        No firewall or other security software currently running on either platform (at least for the duration of the test)
    Situation: the router is acting as DHCP server; clients are receiving correct addresses and additional parameters; Internet connectivity is available from all clients; Windows 7 sharing is set to network type = work (not home group); NetBT is disabled on all clients, using SMB over TCP.
    What I can do: I can ping the router and Internet addresses from the wireless XP notebook; I can ping the Win 7 desktop and any VM from the XP wireless notebook; I can ping all devices from the router; all VMs and the Win 7 box can ping each other, the router, and Internet addresses.
    What I can't do: I cannot ping the XP wireless notebook from either the Win 7 desktop or the VMs; it always returns a destination host unreachable error. Tracert resolves the name of the XP notebook but also returns destination host unreachable.
    From the above it would seem that something is blocking connectivity in a single direction (from the Win 7 box to the Win XP notebook) only, yet the router can ping the XP notebook. Some fresh input would be most welcome, as this is beginning to drive me batty. Thanks

    Read the article

  • Is there such a thing as a persistent ram drive?

    - by Linus
    I have a laptop with a LAMP setup. The HDD is slow, which causes my unit tests to run slowly. I was wondering whether I could mount the web root and the MySQL database on some kind of ramdisk. From what I have read, ramdisks are non-persistent. Is there any way to create a ramdisk that writes its changes to an area of the HDD when shutting down and re-mounts the ramdisk on boot?
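
    The usual pattern (a sketch with placeholder paths, since tmpfs itself is always volatile) is to keep the working copy on tmpfs and sync it to disk around boot and shutdown, either from init scripts/cron or with a helper such as anything-sync-daemon:

        # /etc/fstab entry for the volatile working copy
        tmpfs  /srv/www-ram  tmpfs  size=512M  0  0

        # restore the working copy into RAM at boot (init script):
        rsync -a --delete /srv/www-disk/ /srv/www-ram/
        # write changes back before shutdown (and periodically from cron, to be safe):
        rsync -a --delete /srv/www-ram/ /srv/www-disk/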

    Read the article

  • Where is the root?

    - by smwikipedia
    I read the manual page of the "mount" command, and it reads as follows: "All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree." My question is: where is this "big tree" located?

    Read the article

  • Can I use an IP-list include file for iptables blacklisting?

    - by rubo77
    I would like to block all countries except mine in iptables; that is a list with about 100,000 entries. How can I define this blacklist file in a script so that iptables blocks all those IP ranges? Maybe I can use http://www.ipdeny.com/ipblocks/data/countries/, which provides lists in the form 117.55.192.0/20 117.104.224.0/21 119.59.80.0/21 121.100.48.0/21 ... I want to be able to change the blacklist file easily without having to change the iptables script.
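
    With roughly 100,000 networks, one iptables rule per entry becomes slow to load and to match; a common approach (a sketch, the file path is a placeholder) is to load the ipdeny-style list into an ipset and reference it from a single rule:

        #!/bin/sh
        ipset -exist create blacklist hash:net
        ipset flush blacklist
        while read -r net; do
            [ -n "$net" ] && ipset -exist add blacklist "$net"
        done < /etc/blacklist-networks.txt      # one CIDR per line, ipdeny format
        # add this single rule once; swapping the file and re-running the loop
        # above refreshes the set without touching the iptables rules:
        iptables -I INPUT -m set --match-set blacklist src -j DROP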

    Read the article

  • php extensions & apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it. Basically, Thursday last week everything was fine on our server. I come in on Friday and it's a mess: PHP extensions are missing/not working and Apache modules are gone (e.g. oci_* was gone completely, odbc_ not working but still there, and the Apache ntlm_auth for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else. Now I'm wondering if somehow those extensions and such, which we installed months ago, were somehow only saved in a local memory of sorts, and a restart has wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little to no sense to me. To expand on an example of something that's gone very wrong, the PHP odbc_ extension: it's still on the server, it doesn't return undefined function or anything, but it just cannot connect to the data source any more. I've tested it through the command line and it's working perfectly fine with that data source and login details, but all of a sudden having it in the PHP odbc_connect() function it just can't connect. ( [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source. ) But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function. Anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x by the way.
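
    Some generic checks for the "works from the shell, fails under Apache" pattern (a sketch, not a diagnosis of this particular box): the CLI and Apache SAPIs can load different ini files, extensions and environment, and unixODBC reads ODBCINI/ODBCSYSINI from that environment.

        php -m | grep -i odbc    # is the extension loaded for the CLI SAPI?
        php --ini                # which ini files the CLI actually reads
        apachectl -M             # which Apache modules are loaded after the restart
        # then compare against phpinfo() served through Apache: loaded ini files,
        # the odbc section, and the ODBCINI / ODBCSYSINI environment values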

    Read the article
