Search Results

Search found 20313 results on 813 pages for 'batch size'.


  • VSFTPD - FTP over TLS - Upload stops after exactly 82k?

    - by Redsandro
    I installed a VSFTP daemon on a CentOS server, using an RSA certificate for logging in using explicit TLS. Now, I cannot upload more than 82k. With files under that limit, there is no problem. The FTP works like a charm. But as soon as a file reaches 82k with FileZilla (81,952 bytes to be exact), the transfer will stop, and the FTP client hangs until the timeout is reached. FTP client console: 15:10:21 Command: STOR jquery-1.7.2.min.js 15:10:21 Response: 150 Ok to send data. 15:11:21 Error: Connection timed out 15:11:21 Error: File transfer failed after transferring 82 KB in 60 seconds /var/log/vsftpd.log FTP command: Client "x.x.x.x", "STOR jquery-1.7.2.min.js" FTP response: Client "x.x.x.x", "150 Ok to send data." OK UPLOAD: Client "x.x.x.x", "jquery-1.7.2.min.js", 81952 bytes, 1.32Kbyte/sec FTP response: Client "x.x.x.x", "226 File receive OK." // NOT okay, file is bigger // No mention of error here I cannot find relevant info about this problem, apart from a possible problem with trans_chunk_size (not mentioned in the default config), but I tried different sizes and it has no impact on the problem. trans_chunk_size=4096 trans_chunk_size=8192 trans_chunk_size=9999 Of course, after every configuration change, I restarted the server: /etc/init.d/vsftpd restart What else can cause this? It's not the latest version, but it's the latest update within the repositories that has been deemed fit for enterprise usage: Package info: $ yum info vsftpd Loaded plugins: fastestmirror Installed Packages Name : vsftpd Arch : x86_64 Version : 2.0.5 Release : 24.el5_8.1 Size : 286 k Repo : installed Summary : vsftpd - Very Secure Ftp Daemon URL : http://vsftpd.beasts.org/ License : GPL Description: vsftpd is a Very Secure FTP daemon. It was written completely from scratch.
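
    A pattern that fits a transfer stalling at a fixed small size over explicit TLS is a firewall that depends on the FTP connection-tracking helper: once the control channel is encrypted, the helper can no longer read the PASV replies, so the data ports are never opened and the upload dies once the initial TCP buffers drain. A hedged sketch of the usual workaround; the port range and firewall rule are illustrative, not taken from the question:

      # /etc/vsftpd/vsftpd.conf -- pin the passive data ports to a known range
      pasv_enable=YES
      pasv_min_port=50000
      pasv_max_port=50100

      # open that range explicitly, since the conntrack FTP helper cannot inspect TLS
      iptables -I INPUT -p tcp --dport 50000:50100 -j ACCEPT
      /etc/init.d/vsftpd restart

    If the firewall sits in front of the server rather than on it, the same range has to be allowed there instead.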

    Read the article

  • How do I stop postfix from handling my mail?

    - by Tatu Ulmanen
    Here's the situation: I have a domain, let's say domain.com. That domain has Google Apps for Business enabled, so all mail delivered to @domain.com will end up at Google (MX records point to Google). I have a PHP script at domain.com that I use to send mail to myself. But when the PHP script tries to send mail to [email protected], Postfix on that server decides that the recipient is a local user (because the address matches the domain Postfix itself is at), and tries to deliver the mail locally. But it inevitably fails, as the mailbox cannot be found. How can I instruct Postfix not to try to handle any emails to @domain.com locally, and just forward them so Google can pick them up? I have already removed $myhostname from the mydestination field in Postfix's main.cf file, and I have restarted Postfix, but it still tries to deliver the mail locally. Here's a snip from mail.log that shows the problem (addresses replaced): postfix/pickup[20643]: AF718422E5: uid=33 from=<server> postfix/cleanup[20669]: AF718422E5: message-id=<62e706bcca5a0de0bfec6baa576d88a5@server> postfix/qmgr[20642]: AF718422E5: from=<server>, size=517, nrcpt=1 (queue active) postfix/pipe[20678]: AF718422E5: to=<[email protected]>, relay=dovecot, delay=0.62, delays=0.47/0.03/0/0.13, dsn=5.1.1, status=bounced (user unknown) postfix/bounce[20680]: AF718422E5: sender non-delivery notification: 29598422E7 postfix/qmgr[20642]: AF718422E5: removed
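
    Removing $myhostname is usually not enough if domain.com still reaches mydestination some other way, for example through $mydomain. A hedged sketch of how to check and fix it; the exact value to keep depends on what else the box must accept:

      postconf mydestination mydomain     # what does Postfix still treat as local?
      postconf -e "mydestination = localhost.localdomain, localhost"
      postfix reload

    Once domain.com is no longer listed as a local destination (and is not in virtual_alias_domains or virtual_mailbox_domains), Postfix falls back to an MX lookup and hands the message to Google.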

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal has previously been infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software, typically a week or so before it was updated. I have configured the event logs to overwrite as required and to be 5056KB in maximum size. I have also attempted: disabling the Event Log service and restarting; renewing the EVT files in the Windows\system32\config directory; restarting the event log service and restarting; clearing the event log in the Services MMC; resetting the filters to default in the Services MMC; and using the EVENTCREATE command remotely from a CMD window on the server to force an event creation. So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time that the computer has managed to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm thinking possibly a refresh of the 'Windows\system32\config\SystemProfile' folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to be running with as much uptime as possible. I'm also wondering whether anyone knows of any Windows admin tools that allow me to control the event logging options or default security options so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.
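
    A few non-destructive checks that may narrow it down, assuming the Event Log service entries survived the infection; all of these are standard XP tools:

      sc query eventlog
      sc qc eventlog
      reg query HKLM\SYSTEM\CurrentControlSet\Services\Eventlog /s
      cacls %SystemRoot%\system32\config\AppEvent.Evt
      cacls %SystemRoot%\system32\config\SysEvent.Evt

    If the service is running but SYSTEM no longer has full control on the .evt files or on the Eventlog registry keys, that is one plausible, if unconfirmed, reason local event writes would be dropped silently.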

    Read the article

  • Uploading file > 1 MB on Django admin gives 400 Bad Request response.

    - by ayaz
    I have a small Django (1.2.x) project deployed on Apache (2.x) via mod_wsgi (2.x). In the admin, if I upload a file < 1MB, I can get it through; however, for a file, say, 1.2MB in size, I get a 400 response from the server with "Error 400" in the body only. I am wondering why this is happening. As far as I can see, there is no LimitRequestBody set in Apache configuration. I have tried uploading with several browsers including: Firefox, Chrome, and Safari. In the log file for Apache, there is apparently no entry for requests that gave the 400 error response. This is strange. I should point out that the scenario where this is happening is thus: The project in question is deployed on two identical Apache servers (completely identical setup) that are behind a load balancer. On my development setup, of course, the problem does not surface. Any help with this will be very much appreciated.
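
    One thing worth checking, purely as a guess from the 1 MB threshold: if the load balancer is nginx, its client_max_body_size defaults to 1m, which would also explain why the failed requests never appear in the Apache logs. nginx normally answers such requests with 413 rather than 400, so treat this as a suspect to rule out, not a diagnosis:

      # on the balancer, if it is nginx
      grep -R client_max_body_size /etc/nginx/
      # raise the limit in the server or location block that proxies the app
      client_max_body_size 20m;

      # on the Apache backends, rule out per-directory or mod_security limits
      grep -RiE "LimitRequestBody|SecRequestBodyLimit" /etc/apache2/ /etc/httpd/ 2>/dev/null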

    Read the article

  • Can I format a Veritas cluster shared volume from windows?

    - by spaghettidba
    We have a Microsoft Failover Cluster with dynamic disks managed by Veritas Storage Foundation. Today the sysadmins added a new disk for SQL Server but the cluster size on the volume was wrong, so I issued a quick format to change it. The disk volume failed, the SQL Server group failed as well and the cluster became unresponsive. After some minutes I managed to fail over to a passive node. The SAN admins say it's my fault because I shouldn't have formatted the disk from the Windows format applet, but I should have used Veritas Enterprise Administrator instead. Can a format operation bring offline a whole cluster group this way? Relevant error messages: From the eventlog: The cluster resource host subsystem (RHS) stopped unexpectedly. An attempt will be made to restart it. This is usually due to a problem in a resource DLL. Please determine which resource DLL is causing the issue and report the problem to the resource vendor. From the cluster.log ERR [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Control(STORAGE_GET_DISK_INFO_EX) to resource 'NameOfTheDiskGroup' timed out.' Veritas Documentation: Excerpt from Symantec's documentation: Note: Before manually creating the resource, you must format the cluster-shared volume with NTFS using the VEA GUI and mount it on the node where you are trying to create the resource. Does this mean the disk cannot be formatted from Windows? I don't read it that way. For the record, I formatted many disks using the Windows applet in the past and nothing bad happened.

    Read the article

  • PLESK PostFix Error Local in maillog, how to troubleshoot

    - by RCNeil
    I'm using the PHP mail() function, using PostFix, on CentOS6, Plesk 10.4, and my email is not getting delivered to a particular address. My personal GMail and Yahoo email addresses receive email from my server fine and do not produce errors. After a wonderful suggestion on here, I checked my mail logs, and this is the error I see : Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: from= <[email protected]>, size=645, nrcpt=1 (queue active) Apr 10 10:26:29 ######### postfix-local[8331]: postfix-local: [email protected], [email protected], dirname=/var/qmail/mailnames Apr 10 10:26:29 ######### postfix-local[8331]: cannot chdir to mailname dir name: No such file or directory Apr 10 10:26:29 ######### postfix-local[8331]: Unknown user: [email protected] Apr 10 10:26:29 ######### postfix/pipe[8330]: 19EA21827: to=<[email protected]>, relay=plesk_virtual, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered via plesk_virtual service) Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: removed [email protected] is the name I've declared in php.ini for sendmail_from = "[email protected]" sendmail_path = "/usr/sbin/sendmail -t -f [email protected]" and the recipient is supposed to be [email protected]. Is this an error on my side or the recipients? Can I address this on my server? Many thanks SF.
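
    The "postfix-local ... Unknown user" line means Plesk's local delivery agent thinks the recipient's domain is hosted on this server, so it never does the MX lookup that would send the mail off the box. A hedged way to confirm that; the domain names stand in for the ones in the log:

      ls /var/qmail/mailnames/                    # domains Plesk delivers locally
      postconf virtual_mailbox_domains virtual_alias_maps

    If the recipient's domain shows up there but its mailboxes actually live elsewhere, either create the mailbox in Plesk or disable the mail service for that domain (Plesk panel, Mail Settings), so Postfix relays to the real MX instead of trying to deliver into /var/qmail/mailnames.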

    Read the article

  • Asus MyCinema U3100mini Choppy

    - by dsimcha
    I'm running an Asus MyCinema U3100mini ATSC on Windows 7 64-bit. When I play live TV in Windows Media Center, it's very choppy and uses 500+ MB of RAM, I'm guessing due to the hard drive buffering functionality. Is there any way to disable the live TV pause buffer completely? If not, can anyone recommend alternative software that: Works with the MyCinema. Is lightweight and not horribly bloated with features I'll never use like Windows Media Center is. Edits: This is a dual boot system. I've discovered that the tuner actually works fine on XP. It also works fine on my other computer, which has slower hardware and also runs Windows 7 64-bit. The problem actually seems to be with playback at large screen sizes, not with hard drive buffering. Everything works fine below a certain window size and fails for large windows or full screens. Also, the same thing seems to happen whether playing live or recorded TV. As far as the obvious stuff goes, I have the latest video drivers from ATI for my Radeon x1050.

    Read the article

  • how to debug when xinetd says : got signal 17 (child exited)

    - by Faizan Shaik
    I am trying to start the vsftpd and sshd services using xinetd. My config files are as follows. /etc/xinetd.conf defaults { instances = 60 log_type = FILE /var/log/xinetdlog log_on_success = HOST PID log_on_failure = HOST cps = 25 30 only_from = localhost } includedir /etc/xinetd.d /etc/xinetd.d/ftp service ftp { disable = no server = /usr/sbin/vsftpd server_args = -l user = root socket_type = stream protocol = tcp wait = no instances = 4 flags = REUSE nice = 10 log_on_success += DURATION HOST USERID only_from = 127.0.0.1 10.0.0.0/24 } /etc/xinetd.d/ssh service ssh { disable = no log_on_failure += USERID server = /usr/sbin/sshd user = root socket_type = stream protocol = tcp wait = no instances = 20 flags = REUSE only_from = 127.0.0.1 10.0.0.0/24 } Even though I've included the only_from attribute, both the vsftpd and ssh servers refuse connections from localhost, while vsftpd and ssh work fine individually when I check with "service vsftpd start" and "service ssh start". When I debugged using "xinetd -d" in a terminal, I got this output: 13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} Started service: ftp 13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} Started service: ssh 13/10/20@00:06:08: DEBUG: 3592 {cnf_start_services} mask_max = 8, services_started = 2 13/10/20@00:06:08: NOTICE: 3592 {main} xinetd Version 2.3.14 started with libwrap loadavg options compiled in. 13/10/20@00:06:08: NOTICE: 3592 {main} Started working: 2 available services 13/10/20@00:06:08: DEBUG: 3592 {main_loop} active_services = 2 13/10/20@00:06:16: DEBUG: 3592 {main_loop} select returned 1 13/10/20@00:06:16: DEBUG: 3592 {server_start} Starting service ftp 13/10/20@00:06:16: DEBUG: 3592 {main_loop} active_services = 2 13/10/20@00:06:16: DEBUG: 3607 {exec_server} duping 9 13/10/20@00:06:16: DEBUG: 3592 {main_loop} active_services = 2 13/10/20@00:06:16: DEBUG: 3592 {main_loop} select returned 1 13/10/20@00:06:16: DEBUG: 3592 {check_pipe} Got signal 17 (Child exited) 13/10/20@00:06:16: DEBUG: 3592 {child_exit} waitpid returned = 3607 13/10/20@00:06:16: DEBUG: 3592 {server_end} ftp server 3607 exited 13/10/20@00:06:16: DEBUG: 3592 {svc_postmortem} Checking log size of ftp service 13/10/20@00:06:16: INFO: 3592 {conn_free} freeing connection 13/10/20@00:06:16: DEBUG: 3592 {child_exit} waitpid returned = -1 Both services are getting started, but neither of them is working. After banging on this for 3-4 hours I still don't have any clue about this error. Any help would be appreciated. Thanks!
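
    Two details commonly bite with exactly this setup, offered as hedged guesses rather than a confirmed diagnosis: vsftpd only works under xinetd when its own listener is switched off, sshd has to be told it is running in inetd mode, and the -l in server_args does not look like a documented vsftpd switch and may itself make the child exit immediately.

      # /etc/vsftpd.conf (path may differ) -- let xinetd own the socket
      listen=NO
      listen_ipv6=NO

      # /etc/xinetd.d/ftp -- remove the server_args = -l line entirely

      # /etc/xinetd.d/ssh -- sshd must run in inetd mode under xinetd
      server      = /usr/sbin/sshd
      server_args = -i

    After editing, restart xinetd and re-test with telnet localhost 21 and ssh -v localhost.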

    Read the article

  • Vlans and subinterfaces

    - by Adeodatus
    I've inherited a moderate size network that I'm trying to bring some sanity to. Basically, its 8 public class Cs and a slew of private ranges all on one vlan (vlan1, of course). Most of the network is located throughout dark sites. I need to start separating some of the network. I've changed the ports from the main cisco switch (3560) to the cisco router (3825) and the other remote switches to trunking with dot1q encapsulation. I'd like to start moving a few select subnets to different vlans. To get some of the different services provided on our address space (and to separate customers) on to different vlans, do I need to create a subinterface on the router for each vlan and, if so, how do I get the switch port to work on a specific vlan? Keep in mind, these are dark sites and geting console access is difficult if not impossible at the moment. I was planning on creating a subinterface on the router for each vlan then setting the ports with services I want to move to a different vlan to allow only that vlan. Example of vlan3: 3825: interface GigabitEthernet0/1.3 description Vlan-3 encapsulation dot1Q 3 ip address 192.168.0.81 255.255.255.240 the connection between the switch and router: interface GigabitEthernet0/48 description Core-router switchport trunk encapsulation dot1q switchport mode trunk show interfaces gi0/48 switchport Name: Gi0/48 Switchport: Enabled Administrative Mode: trunk Operational Mode: trunk Administrative Trunking Encapsulation: dot1q Operational Trunking Encapsulation: dot1q Negotiation of Trunking: On Access Mode VLAN: 1 (default) Trunking Native Mode VLAN: 1 (default) Administrative Native VLAN tagging: enabled Voice VLAN: none Administrative private-vlan host-association: none Administrative private-vlan mapping: none Administrative private-vlan trunk native VLAN: none Administrative private-vlan trunk Native VLAN tagging: enabled Administrative private-vlan trunk encapsulation: dot1q Administrative private-vlan trunk normal VLANs: none Administrative private-vlan trunk private VLANs: none Operational private-vlan: none Trunking VLANs Enabled: ALL Pruning VLANs Enabled: 2-1001 Capture Mode Disabled Capture VLANs Allowed: ALL Protected: false Unknown unicast blocked: disabled Unknown multicast blocked: disabled Appliance trust: none So, if the boxen hanging off of gi0/18 on the 3560 are on an unmanaged layer2 switch and all within the 192.168.0.82-95 range and are using 192.168.0.81 as their gateway, what is left to do, especially to gi0/18, to get this working on vlan3? Are there any recommendations for a better setup without taking everything offline?
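
    Assuming the router subinterface above stays as the gateway (192.168.0.81), the remaining switch-side work for gi0/18 should be making VLAN 3 exist on the 3560 and putting the port into it as an access port; a hedged sketch, with the VLAN name purely illustrative:

      vlan 3
       name CUSTOMER-NET-3
      !
      interface GigabitEthernet0/18
       switchport mode access
       switchport access vlan 3
      !

    Because Gi0/48 already trunks all VLANs with dot1q, frames tagged for VLAN 3 reach the Gi0/1.3 subinterface as-is; the boxes behind the unmanaged layer-2 switch need no changes beyond keeping .81 as their gateway.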

    Read the article

  • Apache ScriptAlias and access error?

    - by Parhs
    First of all, after much pain, I figured out how to make it work in Apache 2.4 on Windows. Here is my configuration that seems to work successfully for git clone and push and everything. Problem: my configuration works. There is a "Require all denied" on the / directory; I want only git functionality and nothing else. Example request from a git client: 192.168.100.252 - - [07/Oct/2012:04:44:51 +0300] "GET /git/simple/info/refs?service=git-upload-pack HTTP/1.1" 200 264 Error caused by that request: [Sun Oct 07 04:44:51.903334 2012] [authz_core:error] [pid 6988:tid 956] [client 192.168.100.252:13493] AH01630: client denied by server configuration: C:/git-server/web/simple There isn't any error at the git client and everything works fine, but I get this in the error log. Is there any solution to stop this error from appearing? I worry about log size. <VirtualHost *:80> DocumentRoot "C:\git-server\web" ServerName git.****censored**** DirectoryIndex index.php SetEnv GIT_PROJECT_ROOT c:/git-server/repositories SetEnv GIT_HTTP_EXPORT_ALL SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER ScriptAlias /git/ "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe/" <LocationMatch "^/.*/git-receive-pack$"> Options +ExecCGI AuthType Basic AuthName intranet AuthUserFile "C:/git-server/config/users" Require valid-user </LocationMatch> <Directory /> Options All Require all denied </Directory> <Directory "C:\Program Files (x86)\Git\libexec\git-core"> Options +ExecCGI Options All Require all granted </Directory> </VirtualHost>
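
    Two hedged ideas, since the requests themselves succeed: the AH01630 entry is authz_core evaluating the filesystem mapping under DocumentRoot even though the ScriptAlias ultimately serves the request, so either grant access for the /git/ URL space (in 2.4, Location sections merge after Directory sections, so this overrides the global deny) or quiet that one module's logging. Both directives are standard Apache 2.4; whether the first one silences the message in this exact setup is a guess:

      <Location /git/>
          Require all granted
      </Location>

      # or: keep the deny as-is and raise the log threshold for authz_core only
      LogLevel warn authz_core:crit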

    Read the article

  • Configured Samba to join our domain, but logon fails from Windows machine

    - by jasonh
    I've configured a Fedora 11 installation to join our domain. It seems to join successfully (though it reports a DNS update failure) but when I try to access \\fedoraserver.test.mycompany.com I'm prompted for a password. So I enter adminuser and the password and that fails, so I try test.mycompany.com\adminuser and that too fails. What am I missing? EDIT (Update 9/1/09): I can now connect to the machine and see the shares on it (see my response to djhowell's answer) but when I try to connect, I get an error saying The network path was not found. I checked the log entry on the Fedora computer for the computer I'm connecting from (/var/log/samba/log.ComputerX) and it reads: [2009/09/01 12:02:46, 1] libads/cldap.c:recv_cldap_netlogon(157) no reply received to cldap netlogon [2009/09/01 12:02:46, 1] libads/ldap.c:ads_find_dc(417) ads_find_dc: failed to find a valid DC on our site (Default-First-Site-Name), trying to find another DC Config files as of 9/1/09: smb.conf: [global] Workgroup = TEST realm = TEST.MYCOMPANY.COM password server = DC.TEST.MYCOMPANY.COM security = DOMAIN server string = Test Samba Server log file = /var/log/samba/log.%m max log size = 50 idmap uid = 15000-20000 idmap gid = 15000-20000 windbind use default domain = yes cups options = raw client use spnego = no server signing = auto client signing = auto [share] comment = Test Share path = /mnt/storage1 valid users = adminuser admin users = adminuser read list = adminuser write list = adminuser read only = No I also set the krb5.conf file to look like this: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = test.mycompany.com dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h forwardable = yes [realms] TEST.MYCOMPANY.COM = { kdc = dc.test.mycompany.com admin_server = dc.test.mycompany.com default_domain = test.mycompany.com } [domain_realm] dc.test.mycompany.com = test.mycompany.com .dc.test.mycompany.com = test.mycompany.com [appdefaults] pam = { debug = false ticket_lifetime = 36000 renew_lifetime = 36000 forwardable = true krb4_convert = false } I realize that there might be an issue with EXAMPLE.COM in there, however if I change it to TEST.MYCOMPANY.COM then it fails to join the domain with a preauthentication failure. As of 9/1/09, this is no longer the case.
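
    A few hedged checks that usually localize this kind of failure, using names taken from the posted configs; note the pasted smb.conf spells one parameter "windbind use default domain", and if that is not just a transcription slip, Samba will ignore the unknown name:

      testparm -s                                # flags misspelled smb.conf parameters
      kinit adminuser@TEST.MYCOMPANY.COM && klist
      net ads testjoin
      wbinfo -u && wbinfo -g                     # can winbind enumerate domain accounts?
      smbclient -L fedoraserver -U 'TEST\adminuser'

    If kinit or net ads testjoin fails, the "network path was not found" error is most likely a Kerberos/DC-lookup problem (consistent with the cldap/ads_find_dc lines in the log) rather than anything in the share definition.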

    Read the article

  • Is this processor burned?

    - by Jhonnytunes
    I've recently exchanged the processors in two PCChips boards. Both boards are LGA775. Board A is a P17G (Pentium 4 with HyperThreading, 3 GHz) and board B is a P49G (Pentium Dual-Core, 3 GHz). I use board A to watch videos, some of which are 3 GB in size, and this is why I swapped the CPUs. I installed the Dual-Core in board A and it worked out of the box; 3 GB videos now use 5% of the CPU instead of 50%. When I installed the Pentium 4 in board B, I forgot to connect the 4-pin power and, when I powered on the PC, the CPU fan stayed off. Then I connected everything correctly, but now the board doesn't show video. I think the CPU is not working, but I'm not sure about that. The PC turns on, the HD spins, the CPU fan spins and the network socket blinks, but there is no video and the case power LED isn't blinking either. I tried another PSU and everything was the same. I noticed that the CPU has thermal paste on top. I don't really know what's happening; I hope I don't have to buy another CPU. Is it burned?

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    Hi, We have a Ubuntu 10.04 Server that has a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both a group called Staff - UserA creates a file that is readable/writeable by the group (ie. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" from a file not created by them? Edit: The Samba smb.conf is as follows: The share is defined in the smb.conf as: [global] log file = /var/log/samba/log.%m passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . obey pam restrictions = yes map to guest = bad user encrypt passwords = true passwd program = /usr/bin/passwd %u passdb backend = tdbsam dns proxy = no netbios name = ubsrv server string = ubsrv unix password sync = yes os level = 20 syslog = 0 usershare allow guests = yes panic action = /usr/share/samba/panic-action %d max log size = 1000 pam password change = yes workgroup = workgroup [Projects] valid users = @Staff writeable = yes user = @Staff create mode = 0777 path = /srv/samba/Projects directory mode = 0777 store dos attributes = Yes The folder itself looks like this: ls -l /srv/samba/ drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects Thanks in advance, Matt
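
    Samba has a switch aimed at exactly this situation: with dos filemode = yes, any user who has write permission on a file (here, the Staff group) is allowed to change its permissions and DOS attributes, not just the owner. A minimal sketch against the share definition above:

      [Projects]
         ...
         store dos attributes = yes
         dos filemode = yes

      # pick up the change without restarting:
      smbcontrol all reload-config

    Without dos filemode, Samba falls back to the POSIX rule that only the file's owner (or root) may change its mode, which is why UserB cannot clear the Read Only flag that UserA set.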

    Read the article

  • What am I doing wrong in my config for MySql?

    - by Knight Hawk3
    When I load my my.conf with the config at the bottom Mysql fails to start and prints no errors. I am running Arch Linux (Updated) with the latest MySQL (5.5) and the latest nginx (Well latest in the repository, Not sure how to check. Only installed it today) I will give you any info you ask for. Thanks for helping! # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/run/mysqld/mysqld.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/run/mysqld/mysqld.sock skip-locking key_buffer = 16K max_allowed_packet = 1M table_cache = 4 sort_buffer_size = 64K read_buffer_size = 256K read_rnd_buffer_size = 256K net_buffer_length = 2K thread_stack = 64K # Don’t listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (using the “enable-named-pipe” option) will render mysqld useless! # #skip-networking server-id = 1 # Uncomment the following if you want to log updates #log-bin=mysql-bin # Uncomment the following if you are NOT using BDB tables skip-bdb # Uncomment the following if you are using InnoDB tables #innodb_data_home_dir = /var/lib/mysql/ #innodb_data_file_path = ibdata1:10M:autoextend #innodb_log_group_home_dir = /var/lib/mysql/ #innodb_log_arch_dir = /var/lib/mysql/ # You can set .._buffer_pool_size up to 50 – 80 % # of RAM but beware of setting memory usage too high #innodb_buffer_pool_size = 16M #innodb_additional_mem_pool_size = 2M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 5M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 1 #innodb_lock_wait_timeout = 50 skip-innodb [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 1M sort_buffer_size = 1M [myisamchk] key_buffer = 1M sort_buffer_size = 1M [mysqlhotcopy] interactive-timeout So what is my silly error?
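
    MySQL 5.5 removed several options that older sample configs still carry, and an option mysqld does not recognize aborts startup, often before anything reaches the error log you expect. In the file above, skip-locking became skip-external-locking, and skip-bdb can simply be deleted because the BDB engine no longer exists. A hedged way to confirm on an Arch install (the error-log path is a guess for a default datadir):

      mysqld --user=mysql --help --verbose > /dev/null   # prints the first unknown option, if any
      tail -n 50 /var/lib/mysql/$(hostname).err

    After replacing skip-locking with skip-external-locking and removing skip-bdb, mysqld should either start or at least leave a concrete error behind.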

    Read the article

  • Computer hangs at boot screen with new RAID card

    - by shanethehat
    I am trying to build a new server around a Biostar TH61 motherboard and an Adaptec 6405E RAID controller card. The machine booted fine from USB before the RAID card and drives were installed. After installing, on the first boot the card was detected, but then started to spit out the following message every 10 seconds: Error: Controller Kernel Stopped Running << Press any key to continue ... Following the troubleshooting guide I unplugged everything, reseated the card, and reattached all the drives. This time the machine is sitting on the boot screen without any error messages and flashing a cursor, but after 15 minutes of this, nothing seems to be happening. Given that there are no error messages I'm hesitant to reboot again. Is it normal for a RAID card to sit without a status message when it firsts boots, maybe to initialise the drives or something? The current screen output looks a bit like this: Controller #00 found at PCI Slot:01, Bus:01, Dev:00, Func:00 Controller Model: Adaptec 6405E Firmware Version: 5.2-0[18512] Memory Size: 128MB Serial number: 111111111111111 SAS WWN: 50000D1104AE9180 _ Update: So after waiting 30 minutes I've rebooted back to the Kernal Stopped Running error. Maybe time to update the RAID BIOS.

    Read the article

  • how many sites IIS 6 can handle

    - by Sarah Nasir
    Is there a limit on creating sites in IIS? I have searched, and some forum discussions say there is no limit. Someone mentioned that he has created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally, I feel that whatever the limit of IIS is, the resources will run out well before the limit is reached. How do big sites like Blogger and WordPress handle a huge number of sites on their servers? Questions: 1) Is there an upper limit for IIS 6.0? If yes, what is it? 2) What would be a good number of requests for IIS to serve on a decent server? (I am not talking about dynamic requests or logs.) 3) Is there a way I can do a test run on my cloud to test the capability of my server, and what factors should I keep in view (db requests, page size, disk reads/writes, etc.)? Responses will be highly appreciated.

    Read the article

  • Software: Launching League of Legends spectator mode from Command Line (Mac)

    - by Alex Popov
    Background: tl;dr at the end League of Legends has a spectator mode, in which you can watch someone else's game (essentially a replay) with a 3 minute delay. Popular LoL website OP.GG has figured out a clever way of hosting these spectator games on their own servers, thereby making them replayable, as opposed to only being available while the game is on (as Riot does it). If you request a replay from OP.GG, it sends a batch file which looks for where the League is situated and then the magic happens: @start "" "League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1" This works fine on Windows. I'm trying to get it to work on Mac (which has an official client). First I tried running the same command by hand, (split for convenience) /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends 8393 LoLLauncher \ /Applications/ ... /LolClient spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1 Running this, however, just starts the LoLLauncher, which closes all the active League processes. The exactly same thing happens if I just call /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends Next I tried seeing what actually happens when Spectator mode is initiated so I ran $ ps -axf | grep -i lol which showed UID PID PPID C STIME TTY TIME CMD 503 3085 1 0 Wed02pm ?? 0:00.00 (LolClient) 503 24607 1 0 9:19am ?? 0:00.98 /Applications/League of Legends.app/Contents/LOL/RADS/system/UserKernel.app/Contents/MacOS/UserKernel updateandrun lol_launcher LoLLauncher.app 503 24610 24607 0 9:19am ?? 1:08.76 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_launcher/releases/0.0.0.122/deploy/LoLLauncher.app/Contents/MacOS/LoLLauncher 503 24611 24610 0 9:19am ?? 1:23.02 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient -runtime .\ -nodebug META-INF\AIR\application.xml .\ -- 8393 503 24927 24610 0 9:44am ?? 0:03.37 /Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS/LeagueofLegends 8394 LoLLauncher /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient spectator 216.133.234.17:8088 Yn1oMX/n3LpXNebibzUa1i3Z+s2HV0ul 1400781241 NA1 Of Interest: there is (LolClient) which I cannot kill by it's PID. UserKernel updateandrun lol_launcher LoLLauncher.app is launched first. LoLLauncher is launched by the UserKernel (as we can see from the PPID) The very long command (PID: 24927) is how Spectator mode is launched, and is also launched by UserKernel. Spectator mode is launched in exactly the same way that the OP.GG .bat wanted to, with the only difference that Spectator mode connects to Riot instead of OP.GG's spectate server. I tried attaching GDB to the LolClient, but I couldn't get anything meaningful from it since it's an Adobe AIR application (and I've never used GDB with code other than mine own). Next I ran dtruss -a -b 100m -f -p $PID on everything I could think of: the LolClient, the LolLauncher and the UserKernel and skimmed the half a million lines produced. I found stuff like the GET request used to get the information of the game to spectate, but I could not see any launch of the equivalent of League of Legends.exe with spectator options. 
Finally, I ran lsof | grep -i lol to see if anything else was opened in the process, but didn't find anything that seemed appropriate. Open were UserKernel, LolLauncher, LolClient, Adobe AIR, LeagueofLegends and then Bugsplat, all of which are expected. None of this seemed especially relevant to figuring out how LeagueofLegends was opened into spectator mode. It obviously can be done, since Spectator mode is accessible from within the client. It seems likely that it can be done from the CLI, since Windows can do it and the clients are supposed to equals. Unless I'm missing something in the difference between how UNIX and Windows handle CLI application launches. My question is if there are any other things I can try to figure out how to launch Spectator mode myself. tl;dr: Trying to get into spectator mode from the CLI. It's possible on Windows (see first code block) but it just restarts League on Mac. What else can I try to find what call is made, and how to reproduce it? PS: Please let me know how I can improve this question or its formatting, I'd love to use StackOverflow/SuperUser, but as the guys said on the podcast this week (Ep. 59) it's very intimidating. Sorry for posting this on StackOverflow the first time :(
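
    One more diagnostic that may help, hedged because it only observes and does not explain the relaunch: OS X ships execsnoop (a DTrace script), which prints every exec system-wide, so running it while clicking "spectate" in the client shows exactly what UserKernel launches, in what order, and from which parent. Comparing that with the hand-typed invocation may reveal a difference such as the working directory, which the Windows batch file implicitly relies on via @start:

      # terminal 1: watch every exec while spectator mode starts
      sudo execsnoop | grep -i -e league -e lol

      # terminal 2, once the game window is up: full argv and parentage
      ps -axo pid,ppid,args | grep -i leagueoflegends

      # hypothetical retry: run the game binary from inside its own deploy directory
      cd "/Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS"
      ./LeagueofLegends 8394 LoLLauncher "<LolClient path>" spectator fspectate.op.gg:4081 <key> <gameid> NA1

    The <LolClient path>, <key> and <gameid> placeholders stand for the values already visible in the ps output and the OP.GG batch file; the release number in the path is copied from the question and will drift with patches.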

    Read the article

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and got it set up as the Primary domain controller on my network with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows. For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but no body else. However, after making changes to the configuration, restarting Samba, I test by connecting to the share with my Mac over Samba as user 'C' which isn't part of the management group, and I can, incorrectly, see the X share. I've tried alsorts of combinations of specifying the group with no luck at all. I've got a feeling that my global config might be too lenient or something to do with file permissions but being a bit green, I'm without clue. My /etc/samba/smb.conf # Global parameters [global] server role = domain controller server string = Office Server workgroup = LLDOMAIN realm = lldomain.local netbios name = DUMBO passdb backend = samba4 logon path = \\%L\profiles\%U logon drive = L: log file = /var/log/samba/%m.log max log size = 50 security = ads domain logons = yes domain master = auto usershare allow guests = no valid users = %S [netlogon] path = /var/lib/samba/sysvol/lldomain.local/scripts read only = no guest ok = no [sysvol] path = /var/lib/samba/sysvol read only = No guest ok = no valid users = @LLDOMAIN\LLGrpManager [ShareX] path = /data comment = Entire Data Volume guest ok = no comment = Entire Data Volume guest ok = no valid users = @LLDOMAIN\LLGrpManager admin users = @LLDOMAIN\LLGrpManager browsable = no inherit acls = yes inherit permissions = yes ... My /etc/nsswitch.conf I've also instructed the system to use the nss winbind library when searching for users or groups by adding the stanza passwd and group in /etc/nsswitch.conf: passwd: compat winbind group: compat winbind shadow: compat Permissions on the folder in question drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data
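
    Before touching the share stanza again, it is worth confirming that the group name actually resolves on the Ubuntu box, because if @LLDOMAIN\LLGrpManager never maps to a SID, Samba cannot enforce it. A hedged set of checks using the names from the config above:

      getent group 'LLDOMAIN\LLGrpManager'       # resolves through nsswitch/winbind?
      wbinfo --name-to-sid 'LLDOMAIN\LLGrpManager'
      id 'LLDOMAIN\C'                            # is user C really outside the group?
      smbclient //DUMBO/ShareX -U 'LLDOMAIN\C'   # should be refused once valid users works

    If the first two commands fail, fix winbind/NSS resolution first; if they succeed, look at how Samba actually parsed [ShareX], since the stanza above contains duplicated comment/guest ok lines.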

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem, this is what I have for now. After some searching here on serverfault I have decided for a structure of directories like this: 000/000/000000001.jpg ... 236/519/236519107.jpg This structure will allow me to save up to 1'000'000'000 images as I'll store a max of 1'000 images in each leaf. I've created it, from a theoretical point of view seems ok to me (though I've no experience on this), but I want to find out what will happen when there will be directories full of files in there. A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my pc) or should I create directories as they are needed? From a developer point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin point of view, is this ok? I've thought I could do as if the filesystem is already under the running application, I'll make a script that will save images as fast as it can, monitoring things as follows: how much time does it take for an image to be saved when there is no or little space used? how does this change when the space starts to be used up? how much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files? Does launching this command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches has any sense at all? Is this the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
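
    A hedged sketch of the measuring loop described above, assuming bash and GNU time on the ext3 host; sample.jpg is a placeholder test image of representative size:

      #!/bin/bash
      SRC=sample.jpg
      for i in $(seq 1 100); do
          dir=$(printf "%03d/%03d" $((RANDOM % 1000)) $((RANDOM % 1000)))
          mkdir -p "$dir"                        # no-op if the tree was pre-created
          /usr/bin/time -f "%e s  $dir" cp "$SRC" "$dir/$RANDOM$RANDOM.jpg"
      done

    Re-run the loop as the tree fills up and plot the times. As for the last question: sync; echo 3 | sudo tee /proc/sys/vm/drop_caches does make sense before the read tests, because it empties the page cache and the dentry/inode caches so every run starts cold, and it is essentially all that is needed to "start over" short of recreating the filesystem.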

    Read the article

  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$extend\$Usn.Jrnl:$J:$data Here is a picture finally. The large strip in the center of the top band is the largest chunk; the other grey areas are the various clusters belonging to it. On the right, the big long grey line is $logfile (not paging), and it is 63 MB. Paging, 500 MB, is the dark cyan chunk, next to the yellow MFTres in the inner rings. The disk was defragged so they could be seen easier. Not all clusters of this type of file are tagged, but the idea is there. The disk is 4k clusters, now about 12 GB in size. Each cute little block in the picture is .81 MB and represents 207 clusters. The dkGreen section is mostly the whole Winsxs pile, also interesting when they keep telling us it doesn't take much disk space. Wikipedia suggests that in previous NT systems "USN journaling" would be turned on when enabled (assumes it could also be turned off?). What aspects, services, or program is working on putting that stuff all over the disk which is known by $jrnl$ type clusters, even if it is not actual USN journaling? Is it possible in a Windows 7 system to completely disable the journaling, and what would be the ramifications of that? On a Windows XP NTFS system, I do not recall seeing the quantity of disk clusters used with these $jrnl$ names, so I do not recall this being necessary in this quantity for an NTFS file system itself? I understand that it would not be there, if it did not have a useful function :-) Information about how wonderful is fine, if that information will help track down what parts of the system create and use it. Change Journals states: Change journals are also needed to recover file system indexing Hmm, that might explain some of them, or why it was left on the disk. A crash while background indexing?
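
    For the concrete question: the change journal can be inspected and removed per volume with fsutil, which needs an elevated prompt. Whether you want to is another matter; the indexer, backup and sync tools, and anything else that asks "what changed since last time" falls back to full rescans without it, and NTFS (or the next tool that wants it) can recreate it later.

      fsutil usn queryjournal C:
      fsutil usn deletejournal /D C:

    As far as I know there is no supported switch to forbid it permanently, so expect the $UsnJrnl clusters to come back once something re-enables journaling.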

    Read the article

  • Excluding files from web logs

    - by Ray
    I originally tried this question on StackOverflow, but it was suggested that serverfault was a better choice. So, here it is... Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, css files, and scripts, which I can easily exclude by un-checking the 'log visits' check box in IIS for the folder properties. I would also like to exclude log entries for certain common requests which are not in their own folders. Mostly, 'favicon.ico'. 'scriptresource.axd', and 'webresource.axd'. These (especially scriptresource.axd) make up almost a third of a typical log file on my site. So, the question is, how do I tell IIS not to log these requests? And is there any reason that this is a bad idea? The purpose of doing this is to reduce the log file size and the amount of work the server has to do, to make the log file more manageable when I need to dig in to them for troubleshooting, and for my own curiosity. I realize that log file parsers can skip the junk, but I am interested in reducing the raw files, before parsing.
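
    On IIS 7.x this can be done per URL in applicationHost.config with the httpLogging element; a hedged sketch, since the question does not say which IIS version is involved, and the site name is a placeholder:

      <location path="Default Web Site/favicon.ico">
        <system.webServer>
          <httpLogging dontLog="true" />
        </system.webServer>
      </location>
      <location path="Default Web Site/ScriptResource.axd">
        <system.webServer>
          <httpLogging dontLog="true" />
        </system.webServer>
      </location>

    On IIS 6 there is no equivalent per-file switch, so filtering the logs after the fact is usually the fallback there. The main downside of suppressing these entries is losing visibility if one of them ever starts returning errors.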

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (Opensolaris 2009.06) in an older fileserver to evaluate its use for our needs. Our current setup is as follows: Dual core (2,4 GHz) with 4 GB RAM 3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB) We want to evaluate the use of a L2ARC, so the current ZPOOL is: $ zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM afstank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c11t0d0 ONLINE 0 0 0 c11t1d0 ONLINE 0 0 0 c11t2d0 ONLINE 0 0 0 c11t3d0 ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c13t0d0 ONLINE 0 0 0 c13t1d0 ONLINE 0 0 0 c13t2d0 ONLINE 0 0 0 c13t3d0 ONLINE 0 0 0 cache c14t3d0 ONLINE 0 0 0 where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d, size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without SSD are (average values over several runs which show no differences) write_chr write_blk rewrite read_chr read_blk random seeks 101.998 kB/s 214.258 kB/s 96.673 kB/s 77.702 kB/s 254.695 kB/s 900 /s With SSD it becomes interesting. My assumption was that the results should be in worst case at least the same. While write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far), average is 548 +- 200 /s, so below the value w/o SSD. So, my questions are mainly: Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So, even if the SSD is impairing the performance it should be the same in each bonnie++ run. Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data performs better while random seeks on other data just performs similarly like before.
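
    Two observations that may explain the spread, hedged since they are inferred rather than measured here: the L2ARC warms up slowly by design (the feed thread writes at most a few MB/s, governed by l2arc_write_max), so with a 200 GB working set most random reads still miss the SSD for a long time, and every block cached on the SSD also costs ARC memory for its header, which shrinks the 4 GB ARC that the no-SSD runs had to themselves. Watching the cache during a run makes this visible:

      # how much I/O actually lands on the cache device during a bonnie++ run
      zpool iostat -v tank 5

      # L2ARC hit/miss/size counters from the ARC kstats
      kstat -m zfs -n arcstats | egrep 'l2_(hits|misses|size)'

    If l2_hits stays near zero while l2_size is still growing, the variance between runs is mostly warm-up, and the lower average is the ARC-header overhead showing.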

    Read the article

  • Set postfix to send email but not to receive them

    - by CodeShining
    I'm using Google Apps to handle personal email addresses for my domain name, and I set up the DNS as Google suggests. All works fine. Now since I need a SMTP to send emails from my e-commerce I installed Postfix on the server. It works fine when I send emails to any email address but it doesn't send to the same domain name, so let's say my domain is example.com, I set postfix using example.com, if I try to reset a password using [email protected] postfix doesn't send and instead reports on the mail.log Sep 20 01:09:52 ip-10-54-26-162 postfix/pickup[6809]: B09A3415D8: uid=33 from=<www-data> Sep 20 01:09:52 ip-10-54-26-162 postfix/cleanup[6854]: B09A3415D8: message-id=<20120920010952.B09A3415D8@ip-10-54-26-162.eu-west-1.compute.internal> Sep 20 01:09:52 ip-10-54-26-162 postfix/qmgr[30978]: B09A3415D8: from=<[email protected]>, size=4234, nrcpt=1 (queue active) Sep 20 01:09:52 ip-10-54-26-162 postfix/local[6856]: B09A3415D8: to=<[email protected]>, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=5.1.1, status=bounced (unknown user: "myaccount") Of course it cannot find a local user "myaccount" since that account is on Google Apps... How can I tell Postfix to send the email and do not search for a local user?
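
    The bounce lines (relay=local, status=bounced, unknown user) mean Postfix treats example.com as one of its own local domains, so it looks for a Unix account instead of doing the MX lookup that would lead to Google. A hedged fix sketch, with example.com standing in for the real domain:

      postconf mydestination myorigin mydomain
      # keep only the machine itself as a local destination
      postconf -e "mydestination = localhost"
      postfix reload
      tail -f /var/log/mail.log    # resends should now relay to Google's MX

    The domain must also not appear in virtual_alias_domains or virtual_mailbox_domains; with none of those claiming it, the PHP mail() path stays the same and only the routing changes.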

    Read the article

  • MAMP MySQL won't start

    - by Tony
    I uninstalled MAMP completely, downloaded fresh copy of MAMP 2 from the MAMP website, did a clean install. However, when I try to start mysql, I get the following error log 111120 21:37:49 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql 111120 21:37:50 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2 111120 21:37:50 [Note] Plugin 'FEDERATED' is disabled. 111120 21:37:50 InnoDB: The InnoDB memory heap is disabled 111120 21:37:50 InnoDB: Mutexes and rw_locks use GCC atomic builtins 111120 21:37:50 InnoDB: Compressed tables use zlib 1.2.3 111120 21:37:50 InnoDB: Initializing buffer pool, size = 128.0M 111120 21:37:50 InnoDB: Completed initialization of buffer pool 111120 21:37:50 InnoDB: highest supported file format is Barracuda. 111120 21:37:50 InnoDB: Waiting for the background threads to start 111120 21:37:51 InnoDB: 1.1.5 started; log sequence number 1595675 111120 21:37:51 [ERROR] /Applications/MAMP/Library/bin/mysqld: unknown option '--skip-locking' 111120 21:37:51 [ERROR] Aborting 111120 21:37:51 InnoDB: Starting shutdown... 111120 21:37:51 InnoDB: Shutdown completed; log sequence number 1595675 111120 21:37:51 [Note] /Applications/MAMP/Library/bin/mysqld: Shutdown complete 111120 21:37:51 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended I've no clue why this is happening. I googled around and made sure that no instance of MySQL is running. Nothing seems to help.
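
    The log above already names the culprit: mysqld 5.5 aborts on the removed option skip-locking, which is being passed in from one of MAMP's config files. A hedged way to locate and fix it (the exact config path varies between MAMP versions):

      grep -R "skip-locking" /Applications/MAMP/conf /Applications/MAMP/Library 2>/dev/null
      # in whichever file it appears, replace the line with the 5.5-era spelling:
      skip-external-locking

    After saving, start the servers again from the MAMP control panel; if the log then complains about another removed option, repeat the same substitution for it.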

    Read the article
