Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.

Page 553/1523 | < Previous Page | 549 550 551 552 553 554 555 556 557 558 559 560  | Next Page >

  • How do I set up an email server that automatically maintains a list of previous recipients?

    - by hsivonen
    I want to set up an email server with the following characteristics. What software (besides the bogofilter and clamav I'm already naming) should I use, and which HOWTOs should I read?
      - The server should run some flavor of Linux that is as low-maintenance as possible and self-updates for security patches in a timely fashion. (Debian stable?)
      - When email is sent, all the recipients are stored in a list of previous recipients maintained by the server.
      - Incoming messages are scanned with clamav and treated as spam if they contain viruses.
      - When email arrives (and passes clamav), if the sender is on the list of previous recipients, it bypasses the spam filter.
      - If the List-Id header names a mailing list on a manually maintained list of known-clean mailing lists, the message bypasses the spam filter and is delivered into a mailbox chosen by mailing list name.
      - Email that isn't from previous recipients, manually whitelisted domains or known mailing lists gets filtered by bogofilter. Spam goes into a spam mailbox.
      - Email classified as ham is automatically fed to bogofilter training as ham; email classified as spam (including messages with viruses) is automatically fed to bogofilter training as spam.
      - There are mailboxes for false ham and false spam that an IMAP client can move email into, so that the server retrains bogofilter appropriately.
      - Sending requires SMTP over SSL; reading requires IMAPS.
    Should I also use SpamAssassin in addition to bogofilter?
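
    One way to wire up the whitelist bypass and the bogofilter training loop is with procmail on the delivery side; this is only a minimal sketch, and the whitelist path, folder names and bogofilter flags are assumptions, not a tested setup:

      # ~/.procmailrc (sketch)
      WHITELIST=$HOME/.mail/previous-recipients   # one address per line, appended on every send

      # deliver straight to the inbox if the sender is a previous recipient
      :0
      * ? formail -rtzxTo: | grep -i -q -f $WHITELIST
      $DEFAULT

      # otherwise classify with bogofilter (-u auto-trains on its own verdict)
      :0 fw
      | bogofilter -uep

      :0
      * ^X-Bogosity: Spam
      $HOME/Maildir/.Spam/

    Retraining on the false-ham and false-spam folders can then be a cron job running bogofilter -Ns (false negatives) and -Sn (false positives) over their contents.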

    Read the article

  • How to find out which servers are accessing Oracle Internet Directory?

    - by mad sammy
    Hi, we have an OID instance that maintains data about various users and is accessed by many WebLogic servers. The WebLogic servers authenticate against this LDAP, but when authentication fails for one particular server it breaks the authentication process for all of them, so we want to track down the specific server causing the error. Is there any facility to see which servers are using the OID, or does OID keep any logs of its usage for security purposes? Thanks.
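
    Even without touching OID's own audit settings, the operating system can show which hosts currently hold LDAP connections; a rough sketch (assuming the directory listens on the default non-SSL port 3060 — adjust to your port):

      # count established LDAP connections per client address
      netstat -tn | awk '$4 ~ /:3060$/ {split($5, a, ":"); print a[1]}' | sort | uniq -c | sort -rn

    Raising OID's access-log level so that bind failures are recorded per client is also an option, but that is configured inside OID itself rather than at the OS level.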

    Read the article

  • How to interrupt software raid resync?

    - by Adam5
    I want to interrupt a running resync operation on a Debian Squeeze software RAID. (This is the regular scheduled compare resync. The RAID array is still clean in such a case. Do not confuse this with a rebuild after a disk failed and was replaced.) How do I stop this scheduled resync operation while it is running? Another RAID array is "resync pending", because they all get checked on the same day (Sunday night) one after another. I want a complete stop of this Sunday-night resyncing. [Edit: sudo kill -9 1010 doesn't stop it; 1010 is the PID of the md2_resync process] I would also like to know how I can control the intervals between resyncs and the remaining time until the next one. [Edit2: What I did for now was to make the resync go very slowly, so it no longer disturbs anything: sudo sysctl -w dev.raid.speed_limit_max=1000 (taken from http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html). During the night I will set it back to a high value so the resync can terminate. This workaround is fine for most situations; nonetheless it would be interesting to know whether what I asked is possible. For example, it does not seem to be possible to grow an array while it is resyncing or a resync is "pending".]
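
    A minimal sketch of the md interfaces involved (assuming the array is md2 and the stock Debian checkarray cron job is what schedules the Sunday check):

      # stop the check that is currently running on md2
      echo idle | sudo tee /sys/block/md2/md/sync_action

      # or cancel queued/running checks on all arrays via the Debian helper
      sudo /usr/share/mdadm/checkarray --cancel --all

      # the schedule itself lives in the distribution's cron job
      cat /etc/cron.d/mdadm

    Killing md2_resync directly does not work because it is a kernel thread; writing "idle" to sync_action is the supported way to stop it.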

    Read the article

  • using a second computer as a mere screen/monitor in X (VNC?)

    - by lara michaels
    Hello. My goal is to use three monitors with my Linux system. It is a laptop, so adding another video card is not the easiest solution. (I have investigated a number of such options: getting a docking station with a PCI slot, USB/CardBus VGA adapters, etc., and for the time being don't want to go that way.) I am wondering if using an older desktop + screen I have lying around as the third "monitor" might be the easiest solution, if only there is a way to get it to work as a seamless, integrated desktop. I was wondering if I can use VNC or perhaps X itself (?) to achieve the following:
      - computer A is my main computer; it has all my files, etc.
      - computer B is used just to display on an additional screen
      - keyboard + mouse are connected to computer A
      - use VNC or X to connect the two so that computer B shows an X screen that is just as if it were a third physical screen connected to computer A
    I don't know if the last point is clear, but what I mean is that I would like to be able to:
      - have my window manager assign/move virtual desktops across all three screens
      - move windows back and forth between the screens attached to computer A and the screen of computer B
      - copy something in an app shown on a screen of computer A and paste it into an app shown on the screen attached to computer B
      - access the filesystem on my main computer (A) when using applications that are shown on the screen attached to computer B
    Basically, I would like X to treat computer B just like it was nothing but a third physical screen... Is this doable? : ) ~lara
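
    One route to a genuinely seamless spanning desktop is Xdmx (distributed multi-head X) rather than VNC; a rough sketch, with the hostnames purely illustrative and no promises about how well every desktop environment copes with Xdmx:

      # on computer B: run a bare X server and allow A to use it
      X :0 &
      xhost +computer-a

      # on computer A: start a proxy X server that spans A's display and B's
      Xdmx :1 -display :0 -display computer-b:0 +xinerama
      DISPLAY=:1 xterm &    # then launch your window manager/session against :1

    With +xinerama the window manager sees one large desktop, so moving windows and copy/paste work across machines; file access still happens on A because all the applications run there.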

    Read the article

  • Problems with 5.1 digital out on Ubuntu 12.04

    - by user895319
    I've recently bought a new PC, installed Ubuntu, and am now unable to get 5.1 digital sound working. Simple analogue stereo works fine on both the front and rear connectors. On my old box I connected the coax connection from my sound card to my surround sound amplifier, set Settings > Sound to "Digital Stereo Duplex" and it worked. My old sound card doesn't fit in my new machine, so I'm using the built-in sound hardware. I'm connecting the combination output socket on the back of the PC via the same cable to my surround amp as before. The MB is an MSI Global H61M-P31 with a Realtek ALC887 sound chip. When I go to Settings > Sound I only see "Headphone Built-in Audio" and "Analogue Output Built-in Audio" - no digital options. The output from aplay -l is:

      default
          Playback/recording through the PulseAudio sound server
      sysdefault:CARD=PCH
          HDA Intel PCH, ALC887-VD Analog Default Audio Device
      front:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog Front speakers
      surround40:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog 4.0 Surround output to Front and Rear speakers
      surround41:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog 4.1 Surround output to Front, Rear and Subwoofer speakers
      surround50:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog 5.0 Surround output to Front, Center and Rear speakers
      surround51:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog 5.1 Surround output to Front, Center, Rear and Subwoofer speakers
      surround71:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
      dmix:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog Direct sample mixing device
      dsnoop:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog Direct sample snooping device
      hw:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog Direct hardware device without any conversions
      plughw:CARD=PCH,DEV=0
          HDA Intel PCH, ALC887-VD Analog Hardware device with all software conversions

    While googling for ALC887 I've seen some references to "ALC887-VD Analog" and some to "ALC887-VD Digital". Does anyone know if I need to force it to change modes somehow? It's worth mentioning that when I set the output to 5.1 digital surround in Windows 7 on the same machine I still don't get any sound, so it's not a uniquely Linux problem. Thanks for any help.
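
    A quick sketch for checking whether ALSA exposes an S/PDIF (IEC958) output on this codec at all, and for testing it directly if it does (the device names are guesses based on the card name shown above):

      # does the driver expose a digital PCM?
      aplay -L | grep -i iec958

      # if it does, try sending a test tone straight to it
      speaker-test -D iec958:CARD=PCH,DEV=0 -c 2 -t wav

      # unmute/raise any "S/PDIF" controls the codec exposes
      alsamixer -c 0

    If no iec958 device shows up, the combined jack may simply not be wired for digital out on this board, which would also explain why Windows 7 is silent in 5.1 digital mode.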

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    My company has a server with one big partition holding a MySQL database and PHP files. This partition now seems to be corrupted, as reported in the kernel messages when I tried to mount it manually:

      [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
      [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success: e2retrieve, testdisk, photorec, dd_rescue/dd_rhelp, ddrescue, fsck.ext2, e2salvage.

      dumpe2fs 1.41.3 (12-Oct-2008)
      Filesystem volume name:   /dev/sda3
      Last mounted on:          <not available>
      Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal filetype sparse_super
      Default mount options:    (none)
      Filesystem state:         not clean with errors
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              9502720
      Block count:              18987570
      Reserved block count:     949378
      Free blocks:              11555345
      Free inodes:              11858398
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         16384
      Inode blocks per group:   512
      Last mount time:          Wed Mar 24 09:31:03 2010
      Last write time:          Mon Apr 12 11:46:32 2010
      Mount count:              10
      Maximum mount count:      30
      Last checked:             Thu Jan  1 01:00:00 1970
      Check interval:           0 (<none>)
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               128
      Journal inode:            8
      Journal backup:           inode blocks
      dumpe2fs: A block group is missing an inode table while reading journal inode

      e2fsck 1.41.3 (12-Oct-2008)
      fsck.ext3: Group descriptors look bad... trying backup blocks...
      fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3

    I also tried the backup superblocks, with the same error as a result. Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip
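
    A cautious sketch of the usual next steps — work on an image, never on the original device, and feed e2fsck a backup superblock explicitly (the block numbers below are defaults for a 4k-block filesystem and are only a guess until mke2fs -n confirms them):

      # 1. image the partition first so every further attempt is repeatable
      ddrescue /dev/sda3 /backup/sda3.img /backup/sda3.log

      # 2. list where the backup superblocks live (prints only, does NOT format)
      mke2fs -n -b 4096 /backup/sda3.img

      # 3. run the check against a backup superblock on the image
      e2fsck -b 32768 -B 4096 /backup/sda3.img

    If e2fsck still cannot rebuild the group descriptors, carving tools (photorec run against the image) are about all that is left.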

    Read the article

  • Creating Routes using the second NIC in the box

    - by Aditya Sehgal
    OS: Linux. I need some advice on how to set up the routing table. I have a box with two physical NICs, eth0 and eth1, with two associated IPs, IP1 and IP2 (both on the same subnet). I need to set up a route that forces all traffic from IP1 towards IP3 (on the same subnet) to go via IP2. I have a raw socket capture program listening on IP2 (this is not for malicious use). I have set up the routing table as:

      Destination   Gateway   Genmask           Flags   Metric   Ref   Use   Iface
      IP3           IP2       255.255.255.255   UGH     0        0     0     eth1

    If I try to specify eth0 while adding the above rule, I get the error "SIOCADDRT: Network is unreachable". I understand from the route manpage that if the gateway specified is a local interface, it is used as the outgoing interface. After setting up this rule, if I do a traceroute (-i eth0), the packet goes first to the default gateway and then to IP3. How do I force a packet originating from eth0 towards IP3 to first go to IP2? I cannot make changes to the routing table of the gateway. Please suggest.
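
    A sketch using iproute2 source-based policy routing instead of the legacy route command (IP1/IP2/IP3 are placeholders exactly as in the question; whether traffic between two local addresses ever leaves the box also depends on settings such as rp_filter and accept_local):

      # give packets sourced from IP1 their own routing table
      ip rule add from <IP1> lookup 100

      # in that table, force IP3 out through eth1
      ip route add <IP3>/32 dev eth1 table 100
      ip route flush cache

    Note that for locally generated traffic the kernel normally delivers IP1-to-IP2 communication internally, so a capture program bound to IP2 may still not see those packets on the wire.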

    Read the article

  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server and am starting to have bitten off a little more than I can chew in terms of working with my log histories. I need to be able to drill back in time with more flexible criteria and more flexible access than the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume I can pipe just those into MySQL rather than ALL rsyslog input... Could someone please direct me to a resource that walks through setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction; I've not really done a tremendous amount of work with syslog daemons before in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this handier for developers.
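
    A minimal sketch using the rsyslog-mysql module and a property-based filter, so that only lines containing a given identifier are written to the database (the hex token, database name and credentials below are placeholders):

      # /etc/rsyslog.d/60-heroku-mysql.conf
      $ModLoad ommysql

      # send only matching entries to MySQL, then discard them from further processing
      :msg, contains, "d.1234abcd" :ommysql:localhost,Syslog,rsyslog,dbpassword
      & ~

    The rsyslog-mysql package ships a createDB.sql that creates the Syslog database and SystemEvents table the module writes into; a tool such as LogAnalyzer can then browse that table.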

    Read the article

  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior, and whether it is considered bad practice, when creating more than one account on Linux/Unix with the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate using this trick. As an example, let's say I have two accounts with the same IDs:

      a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash
      a2:$1$bmh92:5000:5000::/home/a2:/bin/bash

    What this means is:
      - I can log in to each account using its own password.
      - Files I create will have the same UID.
      - Tools such as "ls -l" will show the owner as whichever account comes first in the passwd file (a1 in this case).
      - I avoid any permissions or ownership problems between the two accounts because they are really the same user.
      - I get login auditing for each account, so I have better granularity for tracking what is happening on the system.
    So my questions are:
      - Is this ability designed, or is it just the way it happens to work?
      - Is this going to be consistent across *nix variants?
      - Is this accepted practice?
      - Are there unintended consequences to this practice?
    Note: the idea here is to use this for system accounts, not normal user accounts.
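
    For reference, a sketch of how such a duplicate-UID account is created deliberately (useradd refuses duplicates unless told otherwise; the names here match the example above):

      # second login sharing UID/GID 5000 with a1
      useradd -o -u 5000 -g 5000 -d /home/a2 -s /bin/bash a2
      passwd a2

    The -o/--non-unique flag exists precisely for this case, which suggests the behaviour is intentional rather than an accident of implementation.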

    Read the article

  • What program sent which packet to the network [closed]

    - by Erik Johansson
    I would like a tcpdump-like program that shows which program sent a specific packet, instead of just getting the port number. This is a generic problem I've had on and off: when you have an old tcpdump file lying around, you have no way to find out what program was sending that data. The answers to "how i can identify which process is making UDP traffic on linux ?" indicate that I could solve this with auditd, dTrace, OProfile or SystemTap, but they don't show how to do it; i.e. they don't show the source port of the program calling bind(). The problem I had was strange UDP packets, and since those ports are so short-lived it took me a while to solve the issue. I solved it by running an ugly hack similar to:

      while true; do date +%s.%N; netstat -panut; done

    So I'm after either a method better than this hack, a replacement for tcpdump, or some way to get this info from the kernel so I can patch tcpdump. EDIT: This was asked on Super User as "tracking what programs sends to net", but with no good solution.
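
    A sketch of the auditd route mentioned above, which records the PID and executable for every socket call so a later capture can be correlated by timestamp and port (the key name is arbitrary):

      # log every bind()/connect() with the issuing process
      auditctl -a exit,always -F arch=b64 -S bind -S connect -k nettrace

      # later: list the recorded events, interpreted into readable form
      ausearch -k nettrace -i

    Matching an old pcap against this log is still manual work (timestamps plus ports), but it beats polling netstat in a loop.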

    Read the article

  • How to back up a database with thousands of files

    - by Neal
    I am working with a Fedora server that runs a customized software package. The server software is quite old, and its database consists of 1,723 files. The database files are constantly changing: they continually grow, and changes are not necessarily appended to the end. Right now we back up every 24 hours at midnight, when all users are off the system and the database is in an internally consistent state. The problem is that we have the potential to lose an entire day's worth of work, which would be unrecoverable. So I'd like to know if there is a way to take some sort of instantaneous snapshot of these database files that we could back up every 30 minutes or so. I've read about Linux LVM snapshots, and am thinking I might be able to accomplish the goal by taking a snapshot, rsyncing the files to a backup server, then dropping the snapshot. But I've never done this before, so I don't know if this is the "right" fix. Any ideas on this? Any better solutions? Thanks!
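
    A minimal sketch of the snapshot-rsync-drop cycle (the volume group, logical volume and paths are placeholders, and it assumes the data really lives on an LVM volume with spare extents in the VG for the snapshot):

      #!/bin/sh
      lvcreate --size 5G --snapshot --name dbsnap /dev/vg0/data   # freeze a point-in-time view
      mkdir -p /mnt/dbsnap
      mount -o ro /dev/vg0/dbsnap /mnt/dbsnap
      rsync -a /mnt/dbsnap/appdata/ backuphost:/backups/appdata/  # copy the consistent view
      umount /mnt/dbsnap
      lvremove -f /dev/vg0/dbsnap                                 # drop the snapshot

    The snapshot is only crash-consistent — equivalent to what would be on disk after a power cut at that instant — so it is worth confirming the application can recover from such a state.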

    Read the article

  • big cpu load on vmware server / linux

    - by dezfafara
    Hi, I am currently using VMware Server 2.x hosting 4 virtual machines on a Linux system. Today, on my physical server, I saw an enormous load average; this is the "top" of the server, showing my 4 virtual guests:

      top - 11:02:02 up 194 days, 23:09, 5 users, load average: 18.78, 12.05, 13.55
      Tasks: 113 total, 4 running, 109 sleeping, 0 stopped, 0 zombie
      Cpu0 : 71.6%us, 19.0%sy, 0.0%ni,  8.8%id, 0.0%wa, 0.3%hi, 0.3%si, 0.0%st
      Cpu1 : 74.3%us, 10.4%sy, 0.0%ni, 15.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu2 : 72.5%us, 17.6%sy, 0.0%ni,  9.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu3 : 79.5%us,  4.6%sy, 0.0%ni, 16.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem:   8178884k total,  8129980k used,    48904k free,  134904k buffers
      Swap: 10490436k total,      148k used, 10490288k free, 6129728k cached

        PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
       7312 root   6 -10 1149m 921m 559m R   97 11.5 107947:09 vmware-vmx
       6995 root   6 -10  779m 687m 317m R   92  8.6 107374:31 vmware-vmx
       6693 root   6 -10  880m 659m 409m S   85  8.3  76947:33 vmware-vmx
      12937 root   6 -10  960m 719m 523m S   75  9.0  67219:49 vmware-vmx

    Those four vmware-vmx processes are the CPU usage of my 4 virtual guests. The guests run Linux, and the relevant processes are usually at 5% - 15% CPU; I don't understand why I have had this problem for the past few days. This is "top" inside a virtual guest that the host shows at 95% CPU load:

      top - 11:23:15 up 194 days, 23:13, 4 users, load average: 0.25, 0.47, 0.59
      Tasks: 92 total, 2 running, 90 sleeping, 0 stopped, 0 zombie
      Cpu(s): 1.4%us, 7.7%sy, 0.0%ni, 90.5%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem:  382296k total, 369732k used, 12564k free, 145156k buffers
      Swap: 979924k total,  13956k used, 965968k free, 86988k cached

        PID USER  PR  NI  VIRT  RES SHR S %CPU %MEM    TIME+ COMMAND
       3691 root  20   0 23948 1148 960 S 13.0  0.3 15339:23 vmware-guestd
       3840 root  20   0 19880  584 512 S  7.7  0.2  1729:17 hald-addon-stor

    This virtual guest's state looks OK... If anyone has any ideas... Thanks

    Read the article

  • Ubuntu purple splash screen with blinking pixels?

    - by joxnas
    I had Ubuntu 9.10 and upgraded to 10.04 after solving some problems (a freeze at boot). Since then I don't get the Ubuntu logo when I boot, but a purple screen with some blinking pixels. I didn't care much about it... but today my computer stayed too long at that screen (normally it is just a quarter of a second, but today it was more like a minute), and it happened 4 or 5 times in a row (only on the 5th time did I realise that it was not freezing, it simply took longer). After a reboot it is back to a quarter second of purple screen, but I don't want this problem to return, so I want to get rid of the purple screen (I think it is an indicator of the underlying problem). I already installed the graphics drivers (via System > Administration > Hardware Drivers), but that didn't solve anything (I don't know if it is even related). I searched on Google and found something old (2006) that may be related to my problem, http://ubuntuforums.org/archive/index.php/t-294692.html, but I couldn't follow the conversation (I'm a Linux novice). I would appreciate any help! My hardware: ATI Mobility Radeon HD 4650, Intel Core 2 Duo P7450 2.13 GHz.
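
    The purple screen with stray pixels usually means Plymouth could not get a proper framebuffer mode with the proprietary ATI driver; a commonly suggested tweak (a sketch to adapt, not a guaranteed fix) is:

      # give GRUB and the initramfs an explicit graphics mode for the splash
      sudo sed -i 's/^#\?GRUB_GFXMODE=.*/GRUB_GFXMODE=1024x768/' /etc/default/grub
      echo FRAMEBUFFER=y | sudo tee /etc/initramfs-tools/conf.d/splash
      sudo update-grub
      sudo update-initramfs -u

    After a reboot the splash should either render properly or at least appear only briefly; if the boot delay itself keeps returning, checking /var/log/boot.log and dmesg for a service that times out is the next step.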

    Read the article

  • Prevent Amazon EC2 Time zone from reverting back on yum update

    - by D.Tate
    I use an Amazon EC2 server instance that runs a distro called Amazon Linux AMI (I've read that it is based on CentOS/Red Hat). My specific version is the 2012.09 release. Anyway, I was able to change the time zone about a week ago from the default UTC to America/New_York (which is EST/EDT). The command I used to change it was:

      ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

    ...thanks to this other Server Fault question. At that point I was able to run date from the command line, and it correctly displayed the EDT time. And even after EDT "fell back" to EST this past Sunday, I was pleased to find that running date still produced the correct local time. So that was great. However, after running a yum update yesterday, it seems that my time zone got reverted back to plain ol' UTC. I even checked the last modified time of the /etc/localtime file, and indeed it confirmed that it had been modified around the same time I had updated. Is there any way to prevent this from happening again, or will I be stuck resetting the time zone every time I do a yum update?
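
    On Amazon Linux (as on other Red Hat derivatives) the zone that package scripts restore comes from /etc/sysconfig/clock, so a sketch of a more persistent setup would be:

      # record the desired zone where glibc updates look for it
      sudo sh -c 'printf "ZONE=\"America/New_York\"\nUTC=true\n" > /etc/sysconfig/clock'

      # and relink localtime as before
      sudo ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime

    With ZONE set, a future update that regenerates /etc/localtime should put back America/New_York instead of UTC.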

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It runs WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and publishes them to the front page of this site. This is so I don't put all the pressure on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT - Details of my server configuration: the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and redirects you there. The front-end app server also mounts the NFS share and requests data from RDS; it is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS server runs CentOS. The front end is Nginx and the publishing server is Apache.
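
    A sketch of the crawl-throttling side (the paths are examples for a WordPress aggregate site and should be checked against whatever actually generates the 30,000 crawlable links — tag, author and feed archives are the usual culprits):

      # /robots.txt
      User-agent: *
      Crawl-delay: 10
      Disallow: /feed/
      Disallow: /tag/
      Disallow: /author/
      Disallow: /*?replytocom=

    Googlebot ignores Crawl-delay, so its rate has to be lowered in Google Webmaster Tools (Site settings > Crawl rate); the Disallow lines still cut the number of URLs it tries to follow.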

    Read the article

  • SQLSTATE[HY000]: General error: 2006 MySQL server has gone away

    - by Barkat Ullah
    Server details: RAM: 16 GB, HDD: 1000 GB, OS: Linux 2.6.32-220.7.1.el6.x86_64, Processor: 6 cores. Please see the link below for my # top preview. I often see the error mentioned in the title in my Plesk panel, and my /etc/my.cnf configuration is as below:

      bind-address=127.0.0.1
      local-infile=0
      datadir=/var/lib/mysql
      socket=/var/lib/mysql/mysql.sock
      user=mysql
      max_connections=20000
      max_user_connections=20000
      key_buffer_size=512M
      join_buffer_size=4M
      read_buffer_size=4M
      read_rnd_buffer_size=512M
      sort_buffer_size=8M
      wait_timeout=300
      interactive_timeout=300
      connect_timeout=300
      tmp_table_size=8M
      thread_concurrency=12
      concurrent_insert=2
      query_cache_limit=64M
      query_cache_size=128M
      query_cache_type=2
      transaction_alloc_block_size=8192
      max_allowed_packet=512M

      [mysqldump]
      quick
      max_allowed_packet=512M

      [myisamchk]
      key_buffer_size=128M
      sort_buffer_size=128M
      read_buffer_size=32M
      write_buffer_size=32M

      [mysqlhotcopy]
      interactive-timeout

      [mysqld_safe]
      log-error=/var/log/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid
      open_files_limit=8192

    My Apache tuning lives in /etc/httpd/conf.d/swtune.conf, with the prefork settings below:

      <IfModule prefork.c>
        StartServers          8
        MinSpareServers      10
        MaxSpareServers      20
        ServerLimit        1536
        MaxClients         1536
        MaxRequestsPerChild 4000
      </IfModule>

    If I run grep -i maxclient /var/log/httpd/error_log I see this error every day:

      [root@u16170254 ~]# grep -i maxclient /var/log/httpd/error_log
      [Sun Apr 15 07:26:03 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting
      [Mon Apr 16 06:09:22 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting

    I have tried to describe everything I changed to keep my server okay, but most of the time my server is down. Please help me work out which parameters to change so that the server stays up and my sites load fast; they are currently taking far too long to load.
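
    A small diagnostic sketch for the "gone away" errors themselves — with 1536 Apache children and max_allowed_packet at 512M, the usual suspects are mysqld being restarted or OOM-killed under memory pressure, or clients idling past wait_timeout:

      # was mysqld restarted or OOM-killed around the time of the errors?
      grep -iE 'oom|out of memory|mysqld' /var/log/messages | tail
      grep -iE 'shutdown|ready for connections' /var/log/mysqld.log | tail

      # what limits is the running server actually using?
      mysql -e "SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
                SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
                SHOW GLOBAL STATUS LIKE 'Max_used_connections';"

    If the log shows restarts, lowering MaxClients and the per-thread MySQL buffers so the combined footprint fits in 16 GB is more likely to help than raising anything.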

    Read the article

  • background jobs and ssh connections

    - by petrelharp
    This question has come up quite a lot (really a lot), but I'm finding the answers to be generally incomplete. The general question is "Why does/doesn't my job get killed when I exit/kill ssh?", and here's what I've found. The first question is: how general is the following information? It seems to be true for modern Debian Linux, but I am missing some bits; and what do others need to know?
      1. All child processes of a shell opened over an ssh connection, backgrounded or not, are killed with SIGHUP when the ssh connection is closed, but only if the huponexit option is set: run shopt huponexit to see whether it is.
      2. If huponexit is true, you can use nohup or disown to dissociate the process from the shell so it does not get killed when you exit.
      3. If huponexit is false, which is the default on at least some Linuxes these days, then backgrounded jobs will not be killed on a normal logout. But even if huponexit is false, if the ssh connection gets killed or drops (different from a normal logout), backgrounded processes will still get killed. This can be avoided by disown or nohup as in (2).
      4. There is some distinction between (a) processes whose parent process is the terminal and (b) processes that have stdin, stdout, or stderr connected to the terminal. I don't know what happens to processes that are (a) and not (b), or vice versa.
    Final question: how can I avoid behavior (3)? In other words, by default in Debian, backgrounded processes run along merrily by themselves after logout but not after the ssh connection is killed. I'd like the same thing to happen to processes regardless of whether the connection was closed normally or killed. Or is this a bad idea?
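
    For completeness, a sketch of the standard ways to make a job survive both cases, regardless of huponexit (the job names are placeholders):

      nohup long_job > job.log 2>&1 &                 # immune to SIGHUP, stdio detached
      long_job > job.log 2>&1 & disown -h             # keep it out of the shell's SIGHUP list
      setsid long_job > job.log 2>&1 < /dev/null &    # new session, no controlling terminal

    Any one of these is enough in practice; screen or tmux achieve the same thing while also letting you reattach later.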

    Read the article

  • Terminate child processes on ctrl-c

    - by jackweirdy
    In Tiny Core Linux, I have the following script:

      #!/bin/sh
      # ~/.X.d/freerdp.sh
      rdp() {
          while true
          do
              xfreerdp -f [IP Address]
          done
      }
      rdp &

    It's pretty simple; when X starts up and checks the .X.d directory (as is the case in Tiny Core), it finds and executes this script. The script starts up FreeRDP and keeps a connection open to the server by restarting it whenever it closes. As you can see from the rdp & line, the function is run in the background to allow X to continue its startup routine. The problem is that whenever I kill X with Ctrl-Alt-Backspace, the rdp process doesn't die. I'm looking for a way to kill the process as soon as X finishes, either through: A) a script, executed on X closing, which kills the process, or B) modifying the script to check the return value of the xfreerdp command. NB - if the solution checks the return value, it must only end if the command fails to open the X display. For that reason, if you could point me to a reference for xfreerdp return values I'd be grateful.
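
    One sketch along the lines of option B that avoids relying on xfreerdp's (not well documented) exit codes: keep looping only while the X display itself still answers, using xset as the probe:

      #!/bin/sh
      # ~/.X.d/freerdp.sh — sketch; [IP Address] is a placeholder as in the original
      rdp() {
          while xset q >/dev/null 2>&1    # true only while the X server is alive
          do
              xfreerdp -f [IP Address]
          done
      }
      rdp &

    When X goes away, xset q fails, the loop ends and no new xfreerdp is spawned; the copy already running loses its display and exits on its own.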

    Read the article

  • Migrating Linux user data to Windows profiles automatically

    - by scott ryan
    I have what seems to be an incredibly simple problem with a very simple solution but I'm having some trouble connecting dots. I have an aging server running Ubuntu Server which hosts roaming profiles. I am switching to a Windows Server 2012 DS shortly. Users used to be named firstinitial.lastname and we are switching to firstname.lastname. I need to transfer things like favorites, documents, etc. from the roaming Linux profile to the user's local Windows profile. So, the way I think it'd work is by using a login script. I think I'd use a script to mount the Linux server's /home for each user, then do copy to various paths (documents, pictures, etc.). But, how do I automate this for each user that logs in? I'm working with a nonprofit, so doing this by hand would probably be out of their budget. I'm open to suggestions, though. What I want is basically Windows Easy Migration, but I'm fairly certain that won't work under Wine... (Kidding, I promise). Thanks!
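
    A sketch of one way to do the copy server-side instead of per-login: push each old roaming home to the new profile share with an explicit old-name to new-name map (the mount point, share layout and namemap.txt file are all assumptions):

      #!/bin/sh
      # namemap.txt lines look like:  j.smith  john.smith
      while read old new
      do
          rsync -rtv "/home/$old/" "/mnt/profiles/$new/"    # /mnt/profiles = mounted Windows profile share
      done < namemap.txt

    Doing it once from the Linux side avoids writing a Windows logon script at all, and the per-user folders (Documents, Favorites, etc.) can be mapped inside the loop if the target layout differs.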

    Read the article

  • Very Slow DSL (ethernet) speed

    - by Abhijit
    I 'was' on openSUSE 12.2 when my DSL speed was normal. Yesterday I switched from openSUSE to Ubuntu 12.04 and the speed dropped, to the range of 7-25 kbps. Then I switched to Linux Mint, and then to Fedora; still slow. When I was on Ubuntu I disabled IPv6, but still no luck. Now I am on Fedora, this time with a DIFFERENT ISP, and I am still getting very slow speeds, so my guess is this has nothing to do with the OS. What can be wrong? Is this a problem with the NIC? Does NIC speed decrease over time? Does a NIC wear out over time like a keyboard or mouse? Help please. All the OSes I used are 64-bit and my laptop is a Compaq Presario A965TU, Intel Centrino Dual Core. The interesting thing to notice is that I get normal speed while downloading torrents inside torrent clients; the slow-speed issue applies to downloads from any web browser and to installing software from the terminal.
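
    Since torrents are fast while single HTTP connections crawl, here is a quick sketch of things worth checking before blaming the NIC hardware (the interface name is assumed to be eth0):

      ethtool eth0                                    # negotiated link speed/duplex
      ethtool -K eth0 tso off gso off gro off         # try with offloads disabled
      cat /proc/sys/net/ipv6/conf/all/disable_ipv6    # confirm IPv6 really is off
      nslookup google.com                             # slow DNS can mimic a slow link per connection

    Torrent clients open many parallel connections, so per-connection problems (DNS delays, MTU/offload trouble) show up in browsers and package managers first.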

    Read the article

  • User http does not have write permissions on directory?

    - by dwieeb
    I have a bit of an odd setup, I think. I have groups for each domain my server hosts, and I add the user http to each domain group along with the users that should have access to that group's domain. In a PHP script running from a directory 'public_html', I try creating a file:

      <?php
      $output = "";
      print exec('touch test 2>&1', $output);

    But I get "touch: cannot touch `test': Permission denied" and the file is not created. Yet, clearly, the group has all permissions on the directory:

      drwxrwxr-x 5 dwieeb example.com 1024 Feb  4 05:19 public_html

    And here are the permissions on the PHP file in public_html that is trying to use the exec function:

      -rw-rw-r-- 1 dwieeb example.com 59 Feb  4 05:19 test.php

    How is this possible if http is part of the example.com group (as seen from a cat of /etc/group) and the directory has full permissions for the group?

      example.com:x:1000:dwieeb,http

    I'm stumped. EDIT (since apparently I'm not cool enough to answer my own questions yet): Ah, I found the problem. Yes, I restarted Nginx, but the php-fpm daemon must be restarted as well when http is added to the group for my domain. On Arch Linux: rc.d restart php-fpm
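
    For reference, a small sketch of how to confirm whether a running worker has actually picked up a freshly added supplementary group (membership is read at process start, which is why the php-fpm restart mattered):

      # the master php-fpm process (oldest matching PID)
      pid=$(pgrep -o php-fpm)

      # supplementary group IDs the process is really running with
      grep '^Groups:' /proc/$pid/status

    If the new group's GID is missing from that line, no amount of filesystem permissions will help until the daemon is restarted.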

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1 GB of data and indices combined, on a dedicated Linux server. The DB version is '5.0.89-community' and the configuration is controlled via cPanel. PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change, and access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until at some point the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Restarting the server always fixes the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion but constantly. One of the apps maintains several persistent connections since it does a couple of updates per minute, though I don't think that's related. What's driving me mad is that I can't figure out why the server would start refusing connections; there is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image and I didn't write or go over every line of configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
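
    The symptoms (a specific client host getting refused, fixed by a restart) match MySQL's max_connect_errors host-blocking behaviour; a sketch of how to check and work around it, assuming a privileged MySQL account:

      mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_connect_errors';"

      # clear the blocked-host cache without restarting mysqld
      mysql -e "FLUSH HOSTS;"

      # raise the threshold so transient connection errors don't re-block the web host
      mysql -e "SET GLOBAL max_connect_errors = 10000;"

    The tell-tale error text for this case is "Host 'x' is blocked because of many connection errors"; if the message is instead a plain access-denied, the cause is elsewhere (e.g. grants or max_user_connections).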

    Read the article

  • Setting up SSL on JBoss 5

    - by socal_javaguy
    How can I enable SSL on JBoss 5 on a Linux (Red Hat - Fedora 8) box? What I've done so far:
      (1) Created a test keystore.
      (2) Placed the newly generated server.keystore in $JBOSS_HOME/server/default/conf.
      (3) Changed server.xml in $JBOSS_HOME/server/default/deploy/jbossweb.sar to include this:

      <!-- SSL/TLS Connector configuration using the admin devl guide keystore -->
      <Connector protocol="HTTP/1.1" SSLEnabled="true"
                 port="8443" address="${jboss.bind.address}"
                 scheme="https" secure="true" clientAuth="false"
                 keystoreFile="${jboss.server.home.dir}/conf/server.keystore"
                 keystorePass="mypassword" sslProtocol = "TLS" />

      (4) The problem is that when JBoss starts it logs this exception during start-up (but I am still able to view everything under http://localhost:8080/):

      03:59:54,780 ERROR [Http11Protocol] Error initializing endpoint
      java.io.IOException: Cannot recover key
          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:456)
          at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:139)
          at org.apache.tomcat.util.net.JIoEndpoint.init(JIoEndpoint.java:498)
          at org.apache.coyote.http11.Http11Protocol.init(Http11Protocol.java:175)
          at org.apache.catalina.connector.Connector.initialize(Connector.java:1029)
          at org.apache.catalina.core.StandardService.initialize(StandardService.java:683)
          at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:821)
          at org.jboss.web.tomcat.service.deployers.TomcatService.startService(TomcatService.java:313)

    I do know that there's more to be done to enable full SSL client authentication...
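
    "Cannot recover key" from JSSE almost always means the private-key password inside the keystore differs from the keystorePass the connector supplies (Tomcat/JBoss Web uses the same value for both unless told otherwise). A sketch of the usual fix with keytool — the alias is an assumption, check it with -list first:

      keytool -list -keystore server.keystore -storepass mypassword
      keytool -keypasswd -alias <your-key-alias> \
              -keystore server.keystore -storepass mypassword \
              -new mypassword

    Once the key password matches the keystore password, the 8443 connector should come up without the exception.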

    Read the article

  • Samba Does Not List Files In Shared Directory

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share:

      [seanm]
      path = /home/seanm
      writeable = yes
      valid users = seanm, root
      read only = No

    Here's what I see on the server side:

      [seanm@server ~]$ ls -l
      -rw-r--r-- 1 seanm seanm 40 Jan  4 13:45 pangram.txt

    And yet:

      [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP
      Enter seanm's password:
      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1]
      smb: \> ls
        .                                   D        0  Fri Jan  7 10:08:55 2011
        ..                                  D        0  Fri Jan  7 07:58:31 2011

                58994 blocks of size 262144. 50356 blocks available

    This behavior is present on both a Windows client and a Linux client system, and with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure has any complaints about Samba. I doubt that SELinux is the problem, but just in case, here are the relevant settings:

      [root@server ~]# getsebool -a | grep samba
      samba_domain_controller --> off
      samba_enable_home_dirs --> on
      samba_export_all_ro --> off
      samba_export_all_rw --> off
      samba_share_fusefs --> off
      samba_share_nfs --> off
      use_samba_home_dirs --> on
      virt_use_samba --> off

    What am I doing wrong here, and what can I do to fix it?
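
    A short diagnostic sketch for the "connects but lists nothing" case — check what smbd itself reports and whether SELinux is silently denying reads despite the booleans (commands assume root on the server):

      # re-run the listing with more verbose client-side debugging
      smbclient //server/seanm -U seanm -W WORKGROUP -d 3 -c 'ls'

      # is the file labeled with a type smbd may read (expect user_home_t or samba_share_t)?
      ls -Z /home/seanm

      # any recent AVC denials involving smbd?
      ausearch -m avc -c smbd -ts recent

      # and validate the share definition itself
      testparm -s

    AVC denials do not always show up in /var/log/messages, so an explicit ausearch (or a look at /var/log/audit/audit.log) is worth doing before ruling SELinux out.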

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12x2TB HDDs (currently) in a raidz2 (10+2) configuration, so on each computer I have one volume of approximately 20 TB. Now I need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution? I'm thinking about FCoE (10 GB Ethernet), so buying an FCoE (10 GB Ethernet) card for each computer - and what more do I need on the hardware side? (Probably another computer, and an FCoE switch such as a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run FreeBSD with raidz2, but it is possible to change to Linux/Solaris if needed. Any helpful resource that talks about how to build big volumes from smaller RAID groups (on the software side) is very welcome - what OS, what filesystem, what software, etc. In short: I want to get approximately 200 TB of storage (in one filesystem) from the already existing computers/storage. I don't need fast writes, but I do need good read performance (as a big fileserver), and it should work transparently, so that when storing data I don't have to care which computer the data goes onto (e.g. not 10 mountpoints, but one big logical filesystem). Thanks.
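
    A sketch of one software-side answer, a GlusterFS distributed volume layered over the existing raidz2 pools (the node names and brick paths are placeholders, and GlusterFS support is much more mature on Linux than on FreeBSD, which may force the OS change mentioned above):

      # on node1, after glusterd is running on every node:
      gluster peer probe node2    # ... repeat for node3 through node10
      gluster volume create bigvol transport tcp \
          node1:/tank/brick node2:/tank/brick node3:/tank/brick node4:/tank/brick \
          node5:/tank/brick node6:/tank/brick node7:/tank/brick node8:/tank/brick \
          node9:/tank/brick node10:/tank/brick
      gluster volume start bigvol

      # any client (or any of the nodes) then mounts a single ~200TB namespace
      mount -t glusterfs node1:/bigvol /mnt/bigvol

    A plain distributed volume spreads files across the bricks without extra redundancy, which matches the "raidz2 already protects each box" setup; ordinary 10GbE with TCP is enough, so no FCoE gear is required.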

    Read the article
