Search Results

Search found 25075 results on 1003 pages for 'default trace'.

  • How do I increase maximum attachment size in Exchange 2007 SP1?

    - by AspNyc
    I've been looking all over for a relatively simple answer to a fairly straightforward question: "how do I increase the maximum size of attachments that can be sent and/or received in Exchange 2007?". But I have yet to find a solution that works. We have a pretty straightforward setup: Exchange 2007 SP1 running on a single server, with the OWA role delegated to a second server. We did a clean install of Exchange 2007 a year or two ago: we did not upgrade from a previous version. I forget if we installed RTM and then patched it to SP1, or if we installed with SP1 already baked in. I just thought I'd mention those items, in case they influence the answer. So far, I've run the following PowerShell commands on the main Exchange server and verified that they've taken effect:

        Set-TransportConfig -MaxReceiveSize 40MB
        Set-ReceiveConnector "RcvConnector" -MaxMessageSize 40MB
        Set-Mailbox "MailboxName" -MaxReceiveSize 40MB

    As of right now, though, the specified mailbox is still rejecting messages over 10MB. You get brownie points if you can also tell me how to set the default mailbox attachment size limits, so that new accounts don't get default MaxReceiveSize values of "unlimited" as they currently do. Any advice or suggestions would be greatly appreciated. Tx in advance!
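
    One way to see every layer that can cap message size at once (these are standard Exchange 2007 Management Shell cmdlets; "MailboxName" is a placeholder):

        Get-TransportConfig | fl MaxReceiveSize, MaxSendSize
        Get-ReceiveConnector | fl Identity, MaxMessageSize
        Get-SendConnector | fl Identity, MaxMessageSize
        Get-Mailbox "MailboxName" | fl MaxReceiveSize, MaxSendSize

    Whichever of these still reports ~10MB is the limit actually being hit; a per-mailbox value of "unlimited" simply defers to the transport and connector limits above it.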

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself, with any user besides 'sa'. I am running in mixed mode. I have a login/user with a known username. Using that user/login, I can connect properly when directly on the server. When I attempt to log in from anywhere else, I receive a "Login failed for user ''", with Error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that user/login DOES exist, as I can use it locally. There are no further details about the error. Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch -- same result: locally fine, remotely an error. EDIT: Partially resolved. I'm now past the base issue -- the clients were trying to connect via the default instance; I don't know why. So, once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect -- BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
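
    If clients cannot reach the SQL Browser service, instance-name resolution fails in exactly this way. Browser answers on UDP 1434, which is easy to miss when opening ports; a sketch of the rule (netsh syntax for Windows Server 2008, rule name is arbitrary):

        netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434

    With that open, connecting as "Server\InstanceName" should resolve to the static port without having to specify Server,Port.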

  • PHP Composer Not Working On Mac

    - by Richard Knop
    I have installed a Bitnami Mac stack, mainly because I require at least PHP 5.4.7 for my project. However, I have run into an issue with Composer. This is the error I get when I run php composer.phar install --dev:

        Richard-Knops-MacBook-Pro:my-project richardknop$ php composer.phar install --dev
        dyld: Library not loaded: /Applications/MAMP/Library/lib/libiconv.2.dylib
          Referenced from: /opt/local/bin/php
          Reason: Incompatible library version: php requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap
        Richard-Knops-MacBook-Pro:my-project richardknop$

    How do I solve it?
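
    Worth noting: the trace shows the crashing binary is /opt/local/bin/php (a MacPorts build linked against MAMP's libiconv), not the Bitnami stack's PHP at all. A quick check, then running Composer with the stack's own interpreter (the install path is an assumption -- adjust to wherever the Bitnami stack actually lives):

        which php                                              # likely /opt/local/bin/php, not the Bitnami one
        /Applications/mampstack-5.4.x-0/php/bin/php composer.phar install --dev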

  • failover cluster file replication

    - by user156144
    I have a Windows 2008 R2 failover cluster. I am going to move one of our Windows services onto this new server. The service writes some trace information to a log file on the local hard drive. This becomes a problem on a cluster: when node A becomes unavailable and node B takes over, there are 2 places where I need to look for log files. Is there a way to make sure that, regardless of which node is active, I get one complete log file? I have been researching this and there is something called DFS Replication, but I was wondering if there is something better that works with failover clustering... I prefer not having to update my code. I can point the service at a different log location by changing the app.config file, but no code changes...
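
    Since the log path already lives in app.config, one low-touch option is to point the trace listener at storage that fails over with the service -- a clustered disk in the same resource group, or a UNC share. A minimal sketch (the listener name and the S: drive are assumptions):

        <system.diagnostics>
          <trace autoflush="true">
            <listeners>
              <add name="log"
                   type="System.Diagnostics.TextWriterTraceListener"
                   initializeData="S:\Logs\MyService.log" />  <!-- S: = clustered disk -->
            </listeners>
          </trace>
        </system.diagnostics>

    Because the disk moves with the cluster group, whichever node is active appends to the same file, and no code change is needed.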

  • nginx proxy_pass POST 404 errors

    - by Scott
    I have nginx proxying to an app server, with the following configuration:

        location /app/ {
            # send to app server without the /app qualifier
            rewrite /app/(.*)$ /$1 break;
            proxy_set_header Host $http_host;
            proxy_pass http://localhost:9001;
            proxy_redirect http://localhost:9001 http://localhost:9000;
        }

    Any request for /app goes to :9001, whereas the default site is hosted on :9000. GET requests work fine, but whenever I submit a POST request to /app/any/post/url it results in a 404 error. Hitting the URL directly in the browser via GET /app/any/post/url hits the app server as expected. I found other people online with similar problems and added proxy_set_header Host $http_host;, but this hasn't resolved my issue. Any insights are appreciated. Thanks. Full config below:

        server {
            listen 9000;  ## listen for ipv4; this line is default and implied
            #listen [::]:80 default_server ipv6only=on;  ## listen for ipv6

            root /home/scott/src/ph-dox/html;
            # root ../html;  TODO: how to do relative paths?
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /app/ {
                # rewrite here sends to app server without the /app qualifier
                rewrite /app/(.*)$ /$1 break;
                proxy_set_header Host $http_host;
                proxy_pass http://localhost:9001;
                proxy_redirect http://localhost:9001 http://localhost:9000;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                allow ::1;
                deny all;
            }
        }
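
    A first step that usually narrows this down: find out which server is generating the 404. The access logs tell you whether the POST even reached :9001 (log paths are the Debian defaults; adjust as needed):

        tail -f /var/log/nginx/access.log    # does the POST show up here, and with what status?
        # then check the app server's own access log on :9001 for the rewritten URI

    If the app's log shows the request arriving as /any/post/url and answering 404, the rewrite is fine and the POST route is missing on the app side; if the request never arrives, the nginx config is the culprit.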

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows 2k8 R2 with the Web and FTP Server roles set up, which won't serve any content at all. Trying to connect from another host via telnet yields 'connection failed':

        c:\>telnet devserver 80
        Connecting To devserver...Could not open connection to the host, on port 80: Connect failed

    Using netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.); tcpview confirms this, listing no open connections on port 80. The following services related to the Web role are running:

        World Wide Web Publishing Service
        Application Host Helper Service
        Microsoft FTP Service   (ftp connections to port 21 are granted)
        Windows Process Activation Service

    The default website bindings are:

        Type              Host Name   Port   IP Address   Binding Information
        http                          80     *
        net.tcp                                            808:*
        net.pipe                                           *
        net.msmq                                           localhost
        msmq.formatname                                    localhost

    When setting up a new application under the default site, the test function passes both connection and authorisation only if the 'connect as' user is a local admin; otherwise the test errors with 'invalid application path'. At no point is the W3SVC service PID bound to port 80 (it is running and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
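
    Since nothing is listening on 80 at all, it's worth asking HTTP.SYS directly what it has registered (all built in on 2008 R2):

        netsh http show servicestate     # URL groups / registered URLs per process
        netsh http show urlacl           # URL reservations
        netstat -ano | findstr :80       # PID of anything on port 80, if it appears

    If W3SVC is running but no URL group for port 80 shows up, the site is likely stopped or failing to bind -- and the absence of both W3SVC logs and an HTTPERR directory points the same way: requests are never reaching HTTP.SYS in the first place.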

  • monitor network bandwidth via ssh

    - by ServerSideX
    I'm running a CentOS 6.4 server with cPanel. WHM (the admin side panel) shows about 100GB of bandwidth this month. However, the server's RTG shows 3.4TB for the last 30 days, and 121GB in the past 24 hours alone. That doesn't make sense, and I'm trying to trace the cause. It's a shared web hosting server for approximately 300 domains. I would appreciate help tracing this down somehow. I utilize the CSF firewall and ConfigServer eXploit Scanner as well. Graphs:

        Day:  http://s10.postimg.org/ti1qhj5mx/day.png
        Week: http://s7.postimg.org/8ho8kds57/week.png
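
    One likely explanation for the gap, hedged: WHM's number typically counts per-domain HTTP (and mail) traffic, while RTG graphs the raw interface, so the difference is usually non-HTTP traffic. A few standard tools show who is moving the bytes right now (packages available on CentOS 6, assuming EPEL is enabled):

        iftop -i eth0     # live per-connection rates on the interface
        nethogs eth0      # per-process bandwidth
        vnstat -d         # daily interface totals, to cross-check RTG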

  • IIS 8 Random 503 service unavailable

    - by Ivo
    We migrated a busy website to Windows Server 2012 with IIS 8. The website randomly gives the error "503 Service Unavailable"; after a user presses F5, the error is gone again. The website is built in ASP.NET MVC 3 and runs in one application pool with default settings. There are around 500 to 900 concurrent users on the website during the day, and the error happens more often when there are more than 650 users. CPU and memory use on the server are stable. There is nothing about the 503 errors in the application log, the IIS log or the event log. Does anyone have any clue what the problem can be, or how we can trace the problem?
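
    503s that never appear in the IIS site log are usually answered by HTTP.SYS before the request reaches the worker process, and those land in the httperr log instead. A quick look (default path; the reason column -- e.g. QueueFull or AppOffline -- is the interesting part):

        type C:\Windows\System32\LogFiles\HTTPERR\httperr*.log | findstr /i " 503 "

    QueueFull under load would fit the ~650-user threshold, and would point at the application pool's queue length and blocked requests rather than CPU or memory.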

  • How do I apply multiple subnets to a server with one NIC?

    - by Cosban
    I am trying to route multiple IPs through one physical NIC on my dedicated server, for use with Proxmox KVM VMs. I have a dedicated server currently running Debian 4.4.5-8 with 3 IP addresses available for use, which I'll write as 176.xxx.xxx.196 (main), 176.xxx.xxx.198 (on the same subnet as main) and 5.xxx.xxx.166 (different subnet). I am currently trying to route the third IP address on the dedi to a VPS I have set up using Proxmox v2.x, but am having a really, really hard time doing so. Virtual interfaces binding the additional IP addresses work as expected, ruling out external routing problems. The provider has given the following information for the IP addresses on the main subnet:

        gateway:   176.xxx.xxx.193
        netmask:   255.255.255.224
        broadcast: 176.xxx.xxx.223

    As well as the following information for the IP address on the second subnet:

        gateway:   5.xxx.xxx.161
        netmask:   255.255.255.248
        broadcast: 5.xxx.xxx.167

    Everything I've tried with /etc/network/interfaces has either not worked or has rendered the network completely useless. This is the current state of the file, which has the secondary IP address working on the same subnet, as well as IPv6 working, but not the second subnet:

        # Native IPv6 interface
        iface eth0 inet6 manual

        # Bridged IPv4 interface (176.xxx.xxx.193/27)
        auto vmbr0
        iface vmbr0 inet static
            address 176.xxx.xxx.196
            netmask 255.255.255.224
            gateway 176.xxx.xxx.193
            broadcast 176.xxx.xxx.223
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            post-up ip addr add 176.xxx.xxx.198/27 dev vmbr0

        auto vmbr1
        iface vmbr1 inet static
            address 5.xxx.xxx.166
            netmask 255.255.255.248
            gateway 5.xxx.xxx.161
            broadcast 5.xxx.xxx.167
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            post-up ip addr add 5.xxx.xxx.166/27 dev vmbr1

        # Bridged IPv6 interface (range: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx::/64)
        iface vmbr0 inet6 static
            address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
            netmask 64
            up ip -6 route add xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            down ip -6 route del xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            up ip -6 route add default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            down ip -6 route del default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
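
    Two things stand out, hedged since the provider's setup isn't visible from here: the second stanza adds the address with /27, but a netmask of 255.255.255.248 is /29; and a Linux interface can only be enslaved to one bridge, so bridge_ports eth0 cannot appear in both vmbr0 and vmbr1 -- the second bridge is typically left without a port. A sketch of vmbr1 along those lines:

        auto vmbr1
        iface vmbr1 inet static
            address 5.xxx.xxx.166
            netmask 255.255.255.248   # /29, matching the provider's netmask
            bridge_ports none         # eth0 is already owned by vmbr0
            bridge_stp off
            bridge_fd 0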

  • How to enable the 2 concurrent (+1 console) sessions on Windows Server 2012

    - by Dai
    I have a Windows Server 2012 VM running on Windows Azure. I want to enable the ability to have 2 simultaneous administrative sessions over Remote Desktop. This is permitted under the EULA for Windows Server 2012, and is NOT the same thing as the full-blown Terminal Services / RDS feature. In Windows Server 2000 and 2003, multiple concurrent sessions (up to a limit of 2, plus the root /console session) were enabled by default, such that logging in via RDP without logging out first would create a new session rather than reconnecting to the old session. Server 2008 and later use single sessions by default, as this simplifies administration (most people want to reconnect to their old sessions). In Windows Server 2008 R2, you can add the MMC snap-in for Remote Desktop Host Configuration, which allows you to re-enable concurrent sessions. However, in Server 2012, after adding the Remote Administration snap-ins from Server Manager, it seems the Remote Desktop Host Configuration snap-in has been removed. How can I re-enable multiple concurrent sessions for Remote Desktop for Administration in Windows Server 2012?
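
    The GUI that exposed this in 2008 R2 is indeed gone in 2012, but the underlying setting still exists in the registry -- a hedged sketch of the usual approach (this is the value the old snap-in's "restrict each user to a single session" checkbox toggled; back up the key before editing):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f

    With fSingleSessionPerUser = 0, a second RDP logon with the same account starts a new session instead of reconnecting, within the two-administrative-session limit.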

  • How to test TempDB performance?

    - by Matt Penner
    I'm getting some conflicting advice on how to best configure our SQL storage with our current SAN, so I would like to do some of my own performance testing with a few different configurations. I looked at using SQLIOSim, but it doesn't seem to simulate TempDB. Can anyone recommend a way to test data, log and TempDB performance? What about using a SQL Profiler trace file from our production system -- how would I use this to run against my test server? Thanks, Matt
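
    At the raw-I/O level, one hedged option is to benchmark the volume that will hold TempDB with SQLIO using a TempDB-like pattern, i.e. small random writes plus larger sequential ones (the drive letter, file and durations are placeholders):

        sqlio -kW -frandom     -b8  -o8 -s300 -LS T:\testfile.dat
        sqlio -kW -fsequential -b64 -o8 -s300 -LS T:\testfile.dat

    For the Profiler idea: a production trace can be replayed against the test server (Profiler's built-in replay, or the RML Utilities / ostress for higher concurrency), which exercises TempDB with your real workload's temp tables and sorts rather than a synthetic pattern.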

  • MS SQL Server 2005 Express rebuild master DB problem

    - by PaN1C_Showt1Me
    Hi! There has been a power loss on our server, and I cannot start the SQL service because the master DB is corrupted (as the log states). I found many articles recommending running setup.exe with optional parameters, so this is what I did: I downloaded SQLEXPR32.EXE from the MS page and ran it. The first problem was that it extracted all the setup files and started the default installation procedure (which was useless for me, as I need those parameters), and if I cancelled it, all the extracted files disappeared. That's why I copied the extracted files somewhere else and then cancelled the default installation. Now I'm trying to run the setup.exe from the extraction:

        setup.exe /qb INSTANCENAME=MSSQLSERVER REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=xxxxx

    It asks me if I want to rewrite the system DB, which is what I need, but then while installing I get this error:

        An installation package for the product Microsoft SQL Server 2005 Express Edition cannot be found.
        Try the installation again using a valid copy of the installation package 'SqlRun_SQL.msi'

    Then it tries to install something and states: cannot install because the same instance name already exists. But I don't want to install a new instance... Any idea how to solve this, please? Thank you in advance!
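
    Two hedged notes. First, SQL Server 2005 Express installs as the named instance SQLEXPRESS by default, so unless a default instance was explicitly chosen at install time, the rebuild has to name that instance -- otherwise setup treats the command as a brand-new MSSQLSERVER install, which would fit the "instance name already exists" behaviour:

        setup.exe /qb INSTANCENAME=SQLEXPRESS REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=xxxxx

    Second, the 'SqlRun_SQL.msi' error suggests setup cannot see the complete extraction; make sure the entire extracted tree (including the folder that contains SqlRun_SQL.msi) was copied, not just setup.exe. If the existing instance really is the default one, keep INSTANCENAME=MSSQLSERVER and only fix the extraction path.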

  • finding the user of iis apppool \ defaultapppool

    - by LosManos
    My IIS app pool user is trying to create a folder but fails. How do I find out which user it is? Let's say I don't know much about IIS7 but need to trace whatever is happening through tools. The scene of the crime is WinSrv2008 with IIS7, so I fire up Sysinternals Process Monitor to find out what is happening. I find Access Denied on a folder, just as I suspected -- but which user? I add the User column to the output and it says IIS Apppool\Defaultapppool in capitals. Well... that isn't a user, is it? If I go to IIS, its app pools, and Advanced Settings > Process Model > Identity, I can see clues about which user it is, but that is only because I know IIS. What if it had been Apache or LightHttpd or whatever? How do I see the user, so I can give it the appropriate rights?
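
    That name actually is the security principal: IIS identifies each app pool to the OS as "IIS AppPool\<PoolName>", and NTFS ACLs accept it like any user account. Granting the folder permission directly (the path is a placeholder):

        icacls "D:\site\uploads" /grant "IIS AppPool\DefaultAppPool":(OI)(CI)M

    (OI)(CI)M grants modify rights that inherit to new files and subfolders, which covers folder creation -- no need to dig the identity out of IIS at all.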

  • WebDAV "PROPFIND" exception in IIS due to network share?

    - by jacko
    Hi all, we're finding continuous exceptions in the event viewer on our live box, like the following (snippet):

        Process information:
            Process ID: 3916
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information:
            Exception type: HttpException
            Exception message: Path 'PROPFIND' is forbidden.
        Thread information:
            Thread ID: 14
            Thread account name: OURDOMAIN\Account
            Is impersonating: True
        Stack trace:
            at System.Web.HttpMethodNotAllowedHandler.ProcessRequest(HttpContext context)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Other specs: Windows Server 2003 R2 and IIS 6.0. We've narrowed it down to occurring when people try to access shares on the box from within the network, and have discovered (we think) that it's due to the WebDAV web service extension having been disabled by past staff. The exceptions are thrown when trying to access directories that are virtual dirs in IIS, as well as plain old UNC network shares. What are the implications of enabling the WebDAV extension on our live web server? And will this solve our problems with the exceptions in our event log?
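
    Before flipping the extension on, it can help to quantify who is actually sending PROPFIND. With Log Parser 2.2 against the IIS logs (the log directory is the default W3C location; adjust the path and site ID):

        LogParser "SELECT cs-method, COUNT(*) AS hits FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex*.log GROUP BY cs-method ORDER BY hits DESC" -i:IISW3C

    If PROPFIND comes only from internal clients browsing shares, enabling WebDAV will quiet the exceptions, but it also adds authoring verbs against your content -- so it's usually paired with tight NTFS permissions and authentication rules on anything writable.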

  • Apache 403 Forbidden Error when accessing local web server using local IP address

    - by amjo324
    I have an odd problem when attempting to browse to pages stored on a local web server (Apache 2.2). The pages are served as expected when I browse to localhost or 127.0.0.1 on port 80, yet when I attempt to browse to the same pages by referencing the local IP address (192.168.x.x), I receive an HTTP 403 (Forbidden) error. In essence, http://localhost:80 works but http://192.168.x.x:80 doesn't, even though I'm specifying the IP of the local machine. You may be thinking "who cares? just use localhost" -- however, this is the first step in troubleshooting why I cannot remotely access these pages from different hosts on my LAN. I'm presuming this can't be a firewall issue, as I'm only connecting to the local machine; even so, I verified there are no iptables rules that could be having an effect. I've checked the Apache error logs, and the corresponding line of relevance is:

        [Sat Oct 19 07:38:35 2013] [error] [client 192.168.x.x] client denied by server configuration: /var/www/

    I've inspected most of the Apache config files, and they don't appear to differ from what you would expect with a default install. I can't see anything in apache2.conf that would be a problem, and httpd.conf is an empty file. This is an excerpt from /etc/apache2/sites-enabled/000-default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Any insight as to where I can look next to find a solution? Thanks in advance.
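
    "client denied by server configuration" means some Order/Deny/Allow rule is matching -- just not necessarily the one shown. Two quick ways to find where it's coming from (apache2ctl is the Debian/Ubuntu wrapper):

        apache2ctl -S                        # which vhost actually answers 192.168.x.x:80?
        grep -Rni "deny from" /etc/apache2/  # any stricter rule in conf.d, mods-enabled, or other includes

    A different vhost catching IP-based requests, or a Deny rule pulled in by another include (or an .htaccess under /var/www, since AllowOverride is All), would produce exactly this split between localhost and the LAN address.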

  • Virtual host config issues in osx 10.7 server app

    - by Benno
    I have two Mac mini Lion servers set up to run as production and staging machines. My sysadmin decided on these machines over the previous CentOS box we had because they have an "interface" for managing them, rather than just the terminal. To be honest, I prefer the terminal. My problem is, the OS X 10.7 Server.app seems to be having issues with the creation of virtual hosts in the 'Web' section. It seems VERY touchy. For example, I cannot create an http virtual host first: I have to create an https host first with a unique DNS name (e.g. vuly6), then create the http host with a different DNS name from the first (e.g. www), or it appears to override the first one, even though one is SSL and one is non-SSL. Further, it seems to override perfectly good configurations at random. For example, the default sites directory is usually /Users/default/Sites/CustomSites or something, but sometimes when I load the Server.app it changes to /var/empty. Also, if I change or add extra virtual hosts after the first one or two, it starts to mess up, and the first two virtual hosts start having issues. Has anyone had any experience setting up virtual hosts via this app? Am I able to manually create these virtual hosts, without using the app, and without the app overriding my settings when I restart Apache?
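
    One way to see what the app is actually writing, independent of the GUI, is to dump Apache's own view of the vhosts after each change (apachectl is built in on OS X):

        sudo apachectl -S    # lists every vhost, its port, and the config file/line it came from

    Comparing that output before and after Server.app "randomly" rewrites things at least shows which files it owns; keeping hand-edited vhost files outside those is less likely to be clobbered, though that last part is an assumption -- Server.app's regeneration behaviour isn't documented.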

  • Dynamic nginx domain root path based on hostname?

    - by Xeoncross
    I am trying to set up my development nginx/PHP server with a basic master/catch-all vhost config, so that I can create unlimited ___.framework.loc domains as needed:

        server {
            listen 80;
            index index.html index.htm index.php;

            # Test 1
            server_name ~^(.+)\.frameworks\.loc$;
            set $file_path $1;

            root /var/www/frameworks/$file_path/public;
            include /etc/nginx/php.conf;
        }

    However, nginx responds with a 404 error for this setup. I know nginx and PHP are working and have permission, because the localhost config I'm using works fine:

        server {
            listen 80 default;
            server_name localhost;

            root /var/www/localhost;
            index index.html index.htm index.php;

            include /etc/nginx/php.conf;
        }

    What should I be checking to find the problem? Here is a copy of the php.conf they are both loading:

        location / {
            try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_index index.php;

            # Keep these parameters for compatibility with old PHP scripts using them.
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

            # Some default config
            fastcgi_connect_timeout 20;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_intercept_errors on;
            fastcgi_ignore_client_abort off;
            fastcgi_pass 127.0.0.1:9000;
        }
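
    A classic gotcha with this pattern: numbered captures like $1 are fragile because later regex evaluations (including the location ~ \.php$ match) reset them, which is why nginx's documentation recommends named captures when a server_name capture feeds root. A sketch (the variable name is arbitrary):

        server_name ~^(?<sitename>.+)\.frameworks\.loc$;
        root /var/www/frameworks/$sitename/public;

    Two other things worth checking: the prose says framework.loc while the regex and root say frameworks.loc -- if the test hostnames really lack the "s", this server block never matches at all; and each test name needs to resolve (a hosts-file entry per name, unless a local wildcard DNS is set up).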

  • High Load mysql on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory usage is always at the maximum, with only ~500 MB free, and most of it goes to MySQL. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables -- that's why InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user      = mysql
        pid-file  = /var/run/mysqld/mysqld.pid
        socket    = /var/run/mysqld/mysqld.sock
        port      = 3306
        basedir   = /usr
        datadir   = /var/lib/mysql
        tmpdir    = /tmp
        language  = /usr/share/mysql/english
        skip-external-locking
        default-time-zone = 'Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory used:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory at all, but MySQL still stops every day. As you can see, 24 GB sits in the cache (thanks to Michael Hampton for the correction). Load average on the server is 3.5. Maybe the HDD, or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I already tried mysqltuner and tuning-primer.sh, but they marked all green. Mysqltuner output:

        mysqltuner

         >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
         >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
         >>  Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    Mysql primer output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.5.24-9-log x86_64

        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash

        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine

        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections

        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size

        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.

        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine

        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.

        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'
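
    For what it's worth, the numbers above do point at a concrete failure mode rather than "all green": tmp_table_size / max_heap_table_size of 3000M mean a single bad GROUP BY can ask for a 3 GB in-memory table per temp table, and sort_buffer_size = 64M plus join_buffer_size = 5M are per-thread, so 200 connections can legally demand far more than the 24 GB budget. A hedged sketch of the deltas both tuners hint at (exact values need testing on this workload):

        tmp_table_size          = 256M
        max_heap_table_size     = 256M
        sort_buffer_size        = 4M
        max_connections         = 250
        innodb_buffer_pool_size = 16G   # pool is 0% free at 5.85G; grow toward the 39G data size as RAM allows

    That trades some per-query speed for a hard, predictable memory ceiling, which is usually the difference between "slow" and a dead mysqld.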

  • Installing mod_mono on Ubuntu: handler doesn't seem to get registered

    - by Trevor Johns
    I'm trying to install mod_mono on Apache 2 (prefork MPM) on Ubuntu Karmic, and just want an auto-hosting setup (so that any .aspx files are executed, similar to how PHP is normally set up). I did the following to install Mono:

        $ apt-get install libapache2-mod-mono mono-apache-server2 mono-devel
        $ a2dismod mod_mono
        $ a2enmod mod_mono_auto

    I've confirmed that mod_mono is getting loaded by Apache. However, any .aspx pages I try to load are returned unprocessed, still with an application/x-asp-net MIME type. It's as if the mod_mono handler never gets registered with Apache. Here's the contents of /etc/mod_mono_auto.load:

        LoadModule mono_module /usr/lib/apache2/modules/mod_mono.so

    And here's /etc/mod_mono_auto.conf:

        MonoAutoApplication enabled
        AddType application/x-asp-net .aspx
        AddType application/x-asp-net .asmx
        AddType application/x-asp-net .ashx
        AddType application/x-asp-net .asax
        AddType application/x-asp-net .ascx
        AddType application/x-asp-net .soap
        AddType application/x-asp-net .rem
        AddType application/x-asp-net .axd
        AddType application/x-asp-net .cs
        AddType application/x-asp-net .config
        AddType application/x-asp-net .dll
        DirectoryIndex index.aspx
        DirectoryIndex Default.aspx
        DirectoryIndex default.aspx

    I've even tried setting the handler explicitly:

        AddHandler mono .aspx .ascx .asax .ashx .config .cs .asmx .asp

    Nothing seems to help. Any ideas how to get this working?
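
    Two quick sanity checks, since "module loaded" and "config applied" are separate failure points here (apache2ctl is the Debian/Ubuntu wrapper):

        apache2ctl -M | grep mono        # is mono_module actually in the loaded list?
        apache2ctl -t -D DUMP_MODULES    # same list, plus a config syntax check

    If the module is loaded but pages still come back raw, the usual suspect is the auto-hosting config not being included for the vhost serving the site -- worth confirming that mod_mono_auto.conf really is linked into mods-enabled/ and that no vhost overrides the type/handler mappings.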

  • How to start a s3ql script automatically on boot?

    - by ks78
    I've been experimenting with s3ql on Ubuntu 10.04, using it to mount Amazon S3 buckets. However, I'd really like it to mount them automatically. Does anyone know how to do that? I've been working on a script which works when it's run from the command line, but for some reason I can't get it to run automatically on boot. Does anyone have any ideas? Here's my script:

        #! /bin/sh
        # /etc/init.d/s3ql
        #
        ### BEGIN INIT INFO
        # Provides:          s3ql
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO

        case "$1" in
          start)
            # Redirect stdout and stderr into the system log
            DIR=$(mktemp -d)
            mkfifo "$DIR/LOG_FIFO"
            logger -t s3ql -p local0.info < "$DIR/LOG_FIFO" &
            exec > "$DIR/LOG_FIFO"
            exec 2>&1
            rm -rf "$DIR"

            modprobe fuse
            fsck.s3ql --batch s3://mybucket
            exec mount.s3ql --allow-other s3://mybucket /mnt/s3fs
            ;;
          stop)
            umount.s3ql /mnt/s3fs
            ;;
          *)
            echo "Usage: /etc/init.d/s3ql {start|stop}"
            exit 1
            ;;
        esac

        exit 0
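
    Assuming the script itself is sound, the usual missing step on Ubuntu is registering it with the boot sequence -- an init.d file does nothing until its rc symlinks exist:

        sudo chmod +x /etc/init.d/s3ql
        sudo update-rc.d s3ql defaults

    One caveat worth checking afterwards: the LSB header only requires $remote_fs and $syslog, but s3ql needs working networking and DNS to reach S3, so adding $network to Required-Start may be necessary for a reliable mount at boot.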

  • Pinging an external server through OpenVPN tunnel doesn’t work

    - by qdii
    I have an OpenVPN server and a client, and I want to use this tunnel to access not only 10.8.0.0/24 but the whole internet. So far, pinging the server from the client through the tun0 interface works, and vice versa. However, pinging www.google.com from the client through tun0 doesn't work (all packets are lost). I figured that I should configure the server so that any packet coming from tun0 destined for the internet is forwarded, so I came up with this iptables config:

        interface_connecting_to_the_internet='eth0'
        interface_openvpn='tun0'
        internet_ip_address=`ifconfig "$interface_connecting_to_the_internet" | sed -n s'/.*inet \([0-9.]*\).*/\1/p'`

        iptables -t nat -A POSTROUTING -o "${interface_connecting_to_the_internet}" -j SNAT --to-source "${internet_ip_address}"
        echo '1' > /proc/sys/net/ipv4/ip_forward

    Yet this doesn't work: the packets are still lost, and I am wondering what could possibly be wrong with my setup. Some details. ip route gives on the server:

        default via 176.31.127.254 dev eth0 metric 3
        10.8.0.0/24 via 10.8.0.2 dev tun0
        10.8.0.2 dev tun0 proto kernel scope link src 10.8.0.1
        127.0.0.0/8 via 127.0.0.1 dev lo
        176.31.127.0/24 dev eth0 proto kernel scope link src 176.31.127.109

    ip route gives on the client:

        default via 192.168.1.1 dev wlan0 proto static
        10.8.0.1 via 10.8.0.5 dev tun0
        10.8.0.5 dev tun0 proto kernel scope link src 10.8.0.6
        127.0.0.0/8 via 127.0.0.1 dev lo scope link
        192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.109

    The client uses wifi adapter wlan0 and TUN adapter tun0; the server uses ethernet adapter eth0 and TUN adapter tun0. The VPN spans 10.8.0.0/24, and both client and server are running Linux 3.6.1.
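
    The client's routing table is the telling part: its default route still points at 192.168.1.1 via wlan0, so packets for google never enter the tunnel regardless of what the server would do with them. The standard fix is to let OpenVPN install the tunnel default route, either client-side or pushed from the server (server-side sketch; the DNS line is optional and assumes a resolver reachable at 10.8.0.1):

        # server.conf
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"

    def1 adds 0.0.0.0/1 and 128.0.0.0/1 routes, overriding the existing default without deleting it. The SNAT rule on the server is still needed once client traffic actually starts arriving on tun0.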

  • Different versions of iperf for windows give totally different results

    - by Albert Mata
    Measuring TCP throughput from a Windows client to a Solaris server: WXP SP3 with iperf 1.7.0 returns an average of around 90 Mbit; the same client and same server with iperf 2.0.5 for Windows returns an average of 8.5 Mbit. Similar discrepancies have been observed connecting to other servers (W2008, W2003). It's difficult to get to any conclusions when different versions of the same tool provide vastly different results. Example below:

        C:\temp>iperf -v          (from iperf.fr)
        iperf version 2.0.5 (08 Jul 2010) pthreads

        C:\temp>iperf -c solaris10
        Client connecting to solaris10, TCP port 5001
        TCP window size: 64.0 KByte (default)
        [  3] local 10.172.181.159 port 2124 connected with 10.172.180.209 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.2 sec  10.6 MBytes  8.74 Mbits/sec

    Abysmal performance. But now I test from the same host (Windows XP SP3 32-bit, 100 Mbit) to the same server (Solaris 10/sparc 64-bit, 1 Gbit, running iperf 2.0.5 with a default window of 48k) with the old iperf:

        C:\temp1>iperf -v
        iperf version 1.7.0 (13 Mar 2003) win32 threads

        C:\temp1>iperf.exe -c solaris10 -w64k
        Client connecting to solaris10, TCP port 5001
        TCP window size: 64.0 KByte
        [1208] local 10.172.181.159 port 2128 connected with 10.172.180.209 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [1208]  0.0-10.0 sec  112 MBytes  94.0 Mbits/sec

    So one iperf with a 64k window says 8.75 Mbit and the old iperf with the same window size says 94.0 Mbit. These results are constant through repeated tests. From my testing, launching iperf (old) with window size "x" and iperf (new) with window size "x", instead of producing the same or very close results, produces totally different results. The only difference I see is that the old build is compiled as win32 threads vs. pthreads, but parallelism (-P 10) appears to work in both. Anyone have a clue, or can recommend a tool that gives results I can trust? EDIT: Looking at traces from the (old) iperf, it sets the TCP Window Scale flag to 3 in the SYN packet; when I run the (new) iperf this is set to 0 in the initial packet. A quick analysis of the window size through the exchange shows the (old) iperf moving back and forth but mostly at 32k, while the (new) iperf mostly keeps at 64k. Maybe it will help somebody to connect the dots.
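
    The Window Scale difference noted in the EDIT is a solid lead: on XP, the scale option is only inserted in the SYN when RFC 1323 options are enabled and something requests a socket buffer larger than 64 KB before connecting. Two hedged checks -- the first forces a window big enough to require scaling, the second shows the global RFC 1323 setting:

        iperf -c solaris10 -w 128k
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v Tcp1323Opts

    If -w 128k brings the new build up to ~94 Mbit, the 8.5 Mbit figure was a handshake artifact of that build's default socket-buffer handling rather than a real network measurement; if it changes nothing, capturing one transfer per build in Wireshark and comparing the advertised windows is the next step.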

  • Securely wiping a file on a tmpfs

    - by Nanzikambe
    I have a script that decrypts some data to a tmpfs. The directory is secure (permissions), the machine's swap is encrypted (random key on boot), and when the script is done it does a 35-pass wipe (Peter Gutmann) of the cleartext on the tmpfs. I do this because I'm aware that wiping files on a journaling file system is insecure: data may be recovered. For discussion, here are the relevant bits extracted:

        # make the tmpfs
        mkdir /mnt/tmpfs
        chmod 0700 /mnt/tmpfs
        mount -t tmpfs -o size=1M tmpfs /mnt/tmpfs
        cd /mnt/tmpfs

        # decrypt the data
        gpg -o - <crypted_input_file> | \
            tar -xjpf -

        # do processing stuff

        # wipe contents
        find . -type f -exec bcwipe -I {} ';'

        # nuke the tmpfs
        cd ..
        umount -f /mnt/tmpfs
        rm -fR /mnt/tmpfs

    So, my question: assuming for the moment that nobody is able to read the cleartext in the tmpfs while it exists (I use umask to set the cleartext to 0600), is there any way any trace of the cleartext could remain either in memory or on disk after the snippet above completes?
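
    One detail worth tightening, hedged because it doesn't change the on-disk story: the chmod 0700 applies to the mount point, but mounting tmpfs over it replaces the visible permissions with the tmpfs defaults (typically 1777), so passing the mode at mount time closes that window:

        mount -t tmpfs -o size=1M,mode=0700 tmpfs /mnt/tmpfs

    On the core question: tmpfs pages live in the page cache and can only ever reach disk via swap, so with swap encrypted under a boot-random key the main residual risks are RAM itself (cold-boot style attacks) and any copies the "processing stuff" step makes outside the tmpfs.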

  • How Do I Secure WordPress Blogs Against Elemento_pcx Exploit?

    - by Volomike
    I have a client who hosts several WordPress 2.9.2 blogs. They are getting a defacement-style hack via the Elemento_pcx exploit somehow. It drops these files in the root folder of each blog:

        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 default.htm
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 default.php
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.asp
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.aspx
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.htm
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.html
        -rwxr-xr-x  1 userx userx 1459 Apr 16 04:25 index.php*

    It overwrites index.php. A keyword inside each file is "Elemento_pcx". It shows a white fist on a black background with the phrase "HACKED" in bold letters above it. We cannot determine how it gets in to do what it does. The wp-admin password isn't hard, but it's also not very easy either; I'll change it up a little to show you what the password sort of looks like: wviking10. Do you think it's using an engine to crack the password? If so, how come our server logs aren't flooded with wp-admin requests as it runs down a random password list? The wp-content folder has no changes inside it, but runs as chmod 777 because wp-cache required it. Also, the wp-content/cache folder is run as chmod 777 too.
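
    Given the 777 directories, brute force isn't the only suspect: on a shared host, any local script or vulnerable plugin can write into a world-writable folder without ever touching wp-admin. Some quick forensics tying the dropped files to requests (paths and the year are placeholders; the timestamp comes from the listing above):

        ls -l --time-style=full-iso /path/to/blog                          # exact mtimes of the dropped files
        grep "16/Apr/2010:04:2" /path/to/access_log | egrep -i "POST|wp-admin|xmlrpc"

    Matching the 04:25 mtimes against the access log usually reveals the exact request that wrote the files; and tightening wp-content to 755 with the web server user as owner removes the easiest write path in.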

  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to set up a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:

        * Starting domain name service... bind9    [fail]

    So I checked the output of syslog, and this is what I get:

        May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
        May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
        May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
        May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
        May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
        May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
        May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
        May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)

    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here is the contents of named.conf:

        // This is the primary configuration file for the BIND DNS server named.
        //
        // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
        // structure of BIND configuration files in Debian, *BEFORE* you customize
        // this configuration file.
        //
        // If you are just adding zones, please do that in /etc/bind/named.conf.local

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";
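
    BIND's error locations can mislead: "missing ';' before 'include'" at line 10 means the parser reached the include token while still inside an unterminated statement, which is often a stray or invisible character earlier in the file (a comment line that lost its leading //, a BOM, or CR line endings from a Windows editor) rather than a visibly wrong line. A faster loop than restarting the service, plus a hidden-character check:

        named-checkconf /etc/bind/named.conf    # silent when the config (and its includes) parses cleanly
        cat -A /etc/bind/named.conf | head      # shows non-printing characters such as ^M or a BOM

    named-checkconf follows the include chain, so a problem lurking in named.conf.options would surface here with a more useful location.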
