Search Results

Search found 64472 results on 2579 pages for 'data context'.

  • mysql spitting lots of "table marked as crashed" errors

    - by Shawn
    Hi, I have a MySQL server (version: 5.5.3-m3-log, source distribution) and it keeps logging lots of errors like these:

      110214  3:01:48 [ERROR] /usr/local/mysql/libexec/mysqld: Table './mydb/tablename' is marked as crashed and should be repaired
      110214  3:01:48 [Warning] Checking table: './mydb/tablename'

    I'm wondering what the possible causes are and how to fix it. Here is the full MySQL configuration:

      connect_errors = 6000
      table_cache = 614
      external-locking = FALSE
      max_allowed_packet = 32M
      sort_buffer_size = 2G
      max_length_for_sort_data = 2G
      join_buffer_size = 256M
      thread_cache_size = 300
      #thread_concurrency = 8
      query_cache_size = 512M
      query_cache_limit = 2M
      query_cache_min_res_unit = 2k
      default-storage-engine = MyISAM
      thread_stack = 192K
      transaction_isolation = READ-COMMITTED
      tmp_table_size = 246M
      max_heap_table_size = 246M
      long_query_time = 3
      log-slave-updates = 1
      log-bin = /data/mysql/3306/binlog/binlog
      binlog_cache_size = 4M
      binlog_format = MIXED
      max_binlog_cache_size = 8M
      max_binlog_size = 1G
      relay-log-index = /data/mysql/3306/relaylog/relaylog
      relay-log-info-file = /data/mysql/3306/relaylog/relaylog
      relay-log = /data/mysql/3306/relaylog/relaylog
      expire_logs_days = 30
      key_buffer_size = 1G
      read_buffer_size = 1M
      read_rnd_buffer_size = 16M
      bulk_insert_buffer_size = 64M
      myisam_sort_buffer_size = 2G
      myisam_max_sort_file_size = 5G
      myisam_repair_threads = 1
      max_binlog_size = 1G
      interactive_timeout = 64
      wait_timeout = 64
      skip-name-resolve
      slave-skip-errors = 1032,1062,126,1114,1146,1048,1396

    The box is running CentOS 5.5. Thanks for your help.
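
    For the immediate repair, a hedged sketch (table and database names are the ones from the error log; the offline datadir path is a guess based on the binlog path -- quiesce writes and copy the .MYD/.MYI files somewhere safe first):

      # Check and repair a single MyISAM table while the server is running
      mysqlcheck --repair mydb tablename
      # Or sweep every table in the database
      mysqlcheck --repair --databases mydb
      # Offline alternative against the raw index file (server stopped)
      myisamchk --recover /data/mysql/3306/data/mydb/tablename.MYI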

  • Iptables config breaks Java + Elastic Search communication

    - by Agustin Lopez
    I am trying to set up a firewall for a server hosting a Java app and ES. Both are on the same server and communicate with each other. The problem I am having is that my firewall configuration prevents Java from connecting to ES, and I am not sure why. I have tried a lot of things, like opening the port range 9200:9400 to the server IP, without any luck -- but from what I know, all communication inside the server should be allowed with this configuration. The idea is that ES should not be accessible from outside, but it should be accessible from this Java app; ES uses the port range 9200:9400. This is my iptables script:

      echo -e Deleting rules for INPUT chain
      iptables -F INPUT
      echo -e Deleting rules for OUTPUT chain
      iptables -F OUTPUT
      echo -e Deleting rules for FORWARD chain
      iptables -F FORWARD
      echo -e Setting by default the drop policy on each chain
      iptables -P INPUT DROP
      iptables -P OUTPUT ACCEPT
      iptables -P FORWARD DROP
      echo -e Open all ports from/to localhost
      iptables -A INPUT -i lo -j ACCEPT
      echo -e Open SSH port 22 with brute force security
      iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
      iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
      iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
      iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
      iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
      echo -e Open NGINX port 80
      iptables -A INPUT -p tcp --dport 80 -j ACCEPT
      echo -e Open NGINX SSL port 443
      iptables -A INPUT -p tcp --dport 443 -j ACCEPT
      echo -e Enable DNS
      iptables -A INPUT -p tcp -m tcp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
      iptables -A INPUT -p udp -m udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    And I get this in the Java app when this config is in place:

      org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];
        at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1185)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403)

    Do any of you see any problem with this configuration and ES? Thanks in advance.
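
    One hedged guess worth checking: the "no master" block suggests ES cluster discovery is failing, not the HTTP connection itself. Elasticsearch's zen discovery of that era defaults to multicast (group 224.2.2.4, UDP port 54328), and those packets arrive on the physical interface rather than lo, so the INPUT DROP policy eats them even though loopback is open. A sketch of rules that would let discovery and the transport range through (<server-ip> is a placeholder):

      # Allow ES multicast discovery (default group/port, assuming zen multicast)
      iptables -A INPUT -p udp -d 224.2.2.4 --dport 54328 -j ACCEPT
      # Allow the ES HTTP/transport range from the server's own address only
      iptables -A INPUT -p tcp -s <server-ip> --dport 9200:9400 -j ACCEPT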

  • Birt on Tomcat unable to find JARs

    - by LostInTheWoods
    First, my setup:

      BiRT Runtime: 3.7.2
      Ubuntu 10.04
      Tomcat 6
      Sun Java 1.6.0

    I have a JAR file I want to deploy onto the Tomcat server so it is usable by the runtime, so I placed the JAR file in /var/lib/tomcat6/webapps/birt/WEB-INF/lib. As I understand it, this is the default location for JAR files that are going to be used by a BiRT report. But the JAR file is not accessible by the report that is trying to call it. In the BiRT logs I see:

      Error evaluating Javascript expression. Script engine error: ReferenceError: "DynDSinfo" is not defined. (/report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"]#20)
      Script source: /report/data-sources/oda-data-source[@id="54"]/method[@name="beforeOpen"], line: 0, text: __bm_beforeOpen()
      org.eclipse.birt.data.engine.core.DataException: Fail to execute script in function __bm_beforeOpen().

    "DynDSinfo" is the class I am trying to reference... and now for the kicker: this works fine with Tomcat 6 on Windows 7, with the same files in the same places. So is there some additional configuration or some environment variable that needs to be set, or something different on the Linux (Ubuntu) platform? All help or ideas gratefully received, Stephen
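
    A hedged first check, since the identical layout works on Windows: Linux file names are case-sensitive and the Ubuntu tomcat6 package runs Tomcat as an unprivileged user, so a JAR that was readable by everyone on Windows may simply be unreadable (or found under a differently-cased name) here:

      # Verify the case of the file name and that the tomcat6 user can read it
      ls -l /var/lib/tomcat6/webapps/birt/WEB-INF/lib/
      sudo chown root:tomcat6 /var/lib/tomcat6/webapps/birt/WEB-INF/lib/*.jar
      sudo chmod 644 /var/lib/tomcat6/webapps/birt/WEB-INF/lib/*.jar
      sudo service tomcat6 restart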

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions.

    BACKGROUND: We are a growing 'big(ish)'-data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization: our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet when you may need to rework your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions:

    1. We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with their growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know (Ubuntu/MySQL, after more optimization) with the added Amazon horsepower? Or is there some merit to NoSQL and the other technologies they offer?

    2. What are the key metrics I need to gather from Apache and MySQL, other than disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? (See the sketch after this list.)

    3. I've reviewed the AWS/EC2 fine print, but I want second opinions: what other services aside from the basic web/database have proven valuable to you?

    4. I know nothing of Hadoop or many of the other technologies they offer. Echoing my previous question: do you sometimes find it worth it (initially a gamble, basic homework aside) to dive into a whole new environment and end up finding a way of producing your data/site product more efficiently?

    5. Anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, when your company is very niche and very, very technical (scientifically -- or anybody for that matter)?

    Thanks very much for your input - I think this thread could be valuable to others as well.
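
    On question 2, a hedged sketch of a baseline worth capturing on the current boxes before sizing anything at Amazon (the 1-second threshold and 60-second sampling interval are illustrative):

      # Log anything slower than 1s so the cartesian-product offenders surface
      mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 1;"
      # Disk IOPS/throughput and per-interface network trends, sampled each minute
      iostat -dxm 60 > iostat.log &
      sar -n DEV 60 > net.log &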

  • how to split a very large database on sql server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data from 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million.

    What I want to do is somehow split the database so that I don't have a single database file that is 90 GB. I want to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it every time I back up the entire database; however, I would like the 2009 data to be included whenever I run a query.

    Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure it would solve all my problems: I wasn't able to find anything that says whether it migrates pre-existing data or only works for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken
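
    A hedged sketch of the filegroup route (database, file, and path names are illustrative, and moving the existing 2009 tables into the new filegroup means rebuilding their clustered indexes there): put the 2009 tables in their own filegroup, mark it read-only, back it up once, and let routine filegroup backups skip it while queries still see everything.

      sqlcmd -S . -Q "ALTER DATABASE StockData ADD FILEGROUP FG2009"
      sqlcmd -S . -Q "ALTER DATABASE StockData ADD FILE (NAME='stock2009', FILENAME='D:\data\stock2009.ndf') TO FILEGROUP FG2009"
      REM ...rebuild the 2009 tables' clustered indexes ON FG2009, then:
      sqlcmd -S . -Q "ALTER DATABASE StockData MODIFY FILEGROUP FG2009 READ_ONLY"
      sqlcmd -S . -Q "BACKUP DATABASE StockData FILEGROUP='FG2009' TO DISK='E:\backup\FG2009.bak'"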

  • Apache + mod_fcgid + perl = error 500

    - by f-aminov
    Hi guys! I'm trying to set up Apache 2.2 with mod_fcgid and libapache2-mod-perl2 with no luck. I've created a fcgi-bin directory in the root directory of my website and put there a test.fcgi file with the following content:

      #!/usr/bin/perl
      use CGI;
      print "This is test.fcgi!\n";

    While trying to access it via http://www.website.dom/fcgi-bin/test.fcgi I get error 500 (Internal Server Error). Here is my vhost config:

      <VirtualHost 95.131.29.226:8080>
          ServerName website.com
          DocumentRoot /var/www/data/website.com
          SuexecUserGroup user group
          ServerAlias www.website.com
          AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
          <Directory "/var/www/data/website.com/fcgi-bin/">
              Options +ExecCGI
              Allow from all
              Order allow,deny
              AddHandler fcgid-script .fcgi
          </Directory>
      </VirtualHost>

    fcgid.conf:

      <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
          SocketPath /var/lib/apache2/fcgid/sock
          IdleTimeout 3600
          ProcessLifeTime 7200
          MaxProcessCount 8
          DefaultMaxClassProcessCount 2
          IPCConnectTimeout 8
          IPCCommTimeout 60
      </IfModule>

    SuExec log:

      [2010-04-06 03:02:47]: uid: (500/equ) gid: (502/equ) cmd: test.fcgi

    Apache error log:

      test! test!
      [Tue Apr 06 03:02:51 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26267) exit(communication error), terminated by calling exit(), return code: 0
      [Tue Apr 06 03:02:53 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26261) exit(server exited), terminated by calling exit(), return code: 0

    I've no clue why I'm getting error 500, but when I try to access this file from the console ($ perl /var/www/data/website.com/fcgi-bin/test.fcgi) everything works fine without any errors... Any suggestions on how to solve this problem would be greatly appreciated. Thank you!
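
    A hedged observation: a fcgid-script is expected to speak the FastCGI protocol rather than plain CGI, which would explain why it runs fine from the console (plain CGI mode) but dies with exit(communication error) under Apache. A sketch of a protocol-correct version, assuming the CGI::Fast module is installed (it falls back to one plain-CGI pass when run from a shell, so the console test still works):

      #!/usr/bin/perl
      use CGI::Fast;
      # Standard FastCGI accept loop: one iteration per incoming request
      while (my $q = CGI::Fast->new) {
          print $q->header('text/plain');
          print "This is test.fcgi!\n";
      }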

  • Receiving Event ID: 10107, Hyper-V -VMMS

    - by Stargaten
    We are using physical (pass-through) disks on two of our guest operating systems. Is this a known issue? Do we need to have DPM 2010?

      "One or more physical disks are attached to virtual machine 'Myserver'. Backup programs that use the Hyper-V VSS writer cannot back up volumes that are attached to virtual machines as physical disks. To avoid potential data loss, use another method to back up the data on the physical disks. If you restore the data on this virtual machine, make sure to check the data of the physical disk for integrity. (Virtual machine ID 8EF3C0CB-967D-4D67-B4D8-7B782C7AC07C)"
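
    For what it's worth, the event is informational by design: the host-side VSS writer cannot see pass-through disks, so those volumes need a backup that runs inside the guest. A hedged sketch using Windows Server Backup from within the VM (target share and drive letter are illustrative):

      wbadmin start backup -backupTarget:\\backupserver\vmbackups -include:E: -quiet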

    Read the article

  • PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14 + SuExec => Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    - by Zigzag
    Hi, I'm trying to use PHP 5.3.2 + fcgid 2.3.5 + Apache 2.2.14, but I always get the error "Connection reset by peer: mod_fcgid: error reading data from FastCGI server", and Apache returns error 500 each time I try to execute a PHP page. I compiled Apache with these options:

      ./configure --with-mpm=worker --enable-userdir=shared --enable-actions=shared --enable-alias=shared --enable-auth=shared --enable-so --enable-deflate \
        --enable-cache=shared --enable-disk-cache=shared --enable-info=shared --enable-rewrite=shared \
        --enable-suexec=shared --with-suexec-caller=www-data --with-suexec-userdir=site --with-suexec-logfile=/usr/local/apache2/logs/suexec.log --with-suexec-docroot=/home

    Then PHP:

      ./configure --with-config-file-path=/usr/local/apache2/php --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-zlib --enable-exif --with-gd --enable-cgi

    Then fcgid:

      APXS=/usr/local/apache2/bin/apxs ./configure.apxs

    The vhost is:

      <Directory /home/website_panel/site/>
          FCGIWrapper /home/website_panel/cgi/php .php
          ...
          ErrorLog /home/website_panel/logs/error.log
      </Directory>

    cat /home/website_panel/logs/error.log:

      [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
      [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
      (the same warn/error pair repeats at 22:19:42 and 22:19:43)

    The SuExec log:

      root:/usr/local/apache2# cat /var/log/apache2/suexec.log
      [2010-03-07 22:11:05]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
      [2010-03-07 22:11:15]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
      [2010-03-07 22:11:23]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
      [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
      [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
      [2010-03-07 22:19:42]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
      [2010-03-07 22:19:43]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php

      root:/usr/local/apache2# cat logs/error_log
      [Sun Mar 07 22:18:47 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache2/bin/suexec)
      [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Memory Allocated 0 bytes (each conf takes 32 bytes)
      [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Version 0.7 - Initialized [0 Confs]
      [Sun Mar 07 22:18:47 2010] [notice] Apache/2.2.14 (Unix) mod_fcgid/2.3.5 configured -- resuming normal operations

      root:/usr/local/apache2# /home/website_panel/cgi/php -v
      PHP 5.3.2 (cli) (built: Mar 7 2010 16:01:49)
      Copyright (c) 1997-2010 The PHP Group
      Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    If someone has got an idea, I want to hear it ^^ Thanks!
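
    One hedged lead hiding in that output: /home/website_panel/cgi/php -v reports "PHP 5.3.2 (cli)" -- the wrapper is the command-line binary, which does not speak FastCGI, and that alone produces exactly this "error reading data from FastCGI server". With --enable-cgi, the build also produces the CGI/FastCGI SAPI (the source-tree path below is an assumption); the wrapper should report "(cgi-fcgi)" instead:

      /path/to/php-5.3.2/sapi/cgi/php-cgi -v   # should print "PHP 5.3.2 (cgi-fcgi)"
      cp /path/to/php-5.3.2/sapi/cgi/php-cgi /home/website_panel/cgi/php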

  • Is there a way to copy the file paths of ALL active windows' opened files?

    - by frenchglen
    The context of this question is here. I actually know the answer to my question is yes -- like I say in my context, this program can do it (somehow). So the real question is: how can I do it? What code do I put in a simple batch file? (Or do I have to compile a tiny C++ exe and run that exe in the batch file?) Might this code be a solution? (.....EnumDesktopWindows?) To clarify, I know that there's e.g. NirSoft OpenedFilesView which shows all the technically-opened files in Windows, but it's just the 'visible' user-opened ones that I'm after (whatever you can see in Task Manager's Applications tab). Thanks.
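
    A hedged sketch without compiling anything: PowerShell can enumerate the same top-level windows that Task Manager's Applications tab shows. Many programs put the open document's path in the window title, which is as close as a generic script gets; anything more requires per-application automation.

      Get-Process |
          Where-Object { $_.MainWindowTitle } |
          Select-Object -ExpandProperty MainWindowTitle |
          clip

    From a batch file this could be invoked as: powershell -NoProfile -Command "...".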

    Read the article

  • supervisord launches with wrong setuid

    - by friendzis
    I am trying to test a pilot system with nginx connecting to a uwsgi-served application controlled by supervisord, running on ubuntu-server. The application is written in Python with Flask in a virtualenv, although I'm not sure if that is relevant. To test the system I have created a simple hello world with Flask. I want nginx and uwsgi both to run as the www-data user. If I launch uwsgi "manually" from a root shell, I can see the uwsgi processes running as the appropriate user (www-data). However, if I let supervisor launch the application, something strange happens: the uwsgi processes run under my user (friendzis). Consequently, the socket file gets created under the wrong user and nginx cannot communicate with my application. Note: the Linux server runs as a Hyper-V VM, under Windows Server 2008.

    Relevant configuration:

      [uwsgi]
      socket = /var/www/sockets/cowsay.sock
      chmod-socket = 666
      abstract-socket = false
      master = true
      workers = 2
      uid = www-data
      gid = www-data
      chdir = /var/www/cowsay/cowsay
      pp = /var/www/cowsay/cowsay
      pyhome = /var/www/cowsay
      module = cowsay
      callable = app

    supervisor:

      [program:cowsay]
      command = /var/www/cowsay/bin/uwsgi -s /var/www/sockets/cowsay.sock -w cowsay:app
      directory = /var/www/cowsay/cowsay
      user = www-data
      autostart = true
      autorestart = true
      stdout_logfile = /var/www/cowsay/log/supervisor.log
      redirect_stderr = true
      stopsignal = QUIT

    I'm sure I'm missing some minor detail, but I'm unable to notice it. Would appreciate any suggestions.
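
    Two hedged things to check: supervisord can only switch a program to user=www-data if the daemon itself runs as root (started from an unprivileged shell, it cannot setuid and the children keep that shell user's uid), and the command line above never loads the [uwsgi] ini, so its uid/gid settings are not applied either way.

      # Restart the daemon as root so the setuid can succeed (config path may differ)
      sudo supervisord -c /etc/supervisor/supervisord.conf
      sudo supervisorctl restart cowsay
      # Alternatively, have the command load the ini so uid/gid take effect:
      #   command = /var/www/cowsay/bin/uwsgi --ini /path/to/cowsay.ini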

  • MySQL 5.6 won't start on OS X - ambiguous option

    - by MaticPetek
    I would like to try MySQL 5.6 on my machine, but I cannot start it. I always get the error:

      [ERROR] /usr/local/mysql-5.6.5-m8-osx10.6-x86/bin/mysqld: ambiguous option '--log=/var/log/mysqld.log' (log-bin, log_slave_updates)

    my.cnf:

      [mysqld]
      pid-file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/mysql.pid
      log-error=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-error.log
      log-slow-queries=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-slowquery.log
      log-bin=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-bin.log
      general_log_file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-general_log_file.log
      log=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql.log

    I tried to set the "log" and "log-bin" parameters in the my.cnf file and also as start parameters for mysqld, but with no luck. Any idea what I can do? Thank you.

    My environment: OS X 10.6.8, mysql-5.6.5-m8-osx10.6-x86 (not the _x64 version). Note: I'm also running MySQL 5.5 on this machine (different port and socket). I also tried to stop that instance, but I get the same error.
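
    A hedged explanation: the bare "log" option was deprecated in 5.1 and removed by 5.6, so mysqld now treats "--log=" as an ambiguous prefix of log-bin/log_slave_updates; "log-slow-queries" went the same way. The 5.6-era spellings would be:

      general_log = 1
      general_log_file = /usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql.log
      slow_query_log = 1
      slow_query_log_file = /usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-slowquery.log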

  • Coldfusion deployment under Apache Tomcat with Virtual Hosts

    - by smalltiger85
    Hello, I'm looking for the proper way to share the ColdFusion engine for all my virtual hosts. This is how I have the virtual hosts configured in server.xml in the Tomcat conf:

      <Host name="localhost" appBase="webapps"
            unpackWARs="false" autoDeploy="true"
            xmlValidation="false" xmlNamespaceAware="false">
          <Context path="" docBase="cfusion/" reloadable="true" privileged="true"
                   antiResourceLocking="false" anitJARLocking="false" allowLinking="true"/>
      </Host>

      <Host name="mysite1" appBase="webapps"
            xmlValidation="false" xmlNamespaceAware="false">
          <Context path="" docBase="/mnt/webroot/mysite1" />
      </Host>

    This is not valid for me because I'm instantiating one ColdFusion process for every virtual host, and it requires me to copy the WEB-INF folder into every site. My question is: is there any way to share the same ColdFusion instance for every virtual host, keeping the sites' webroots outside of the cfusion folder? I'm using Apache Tomcat 6 and ColdFusion 8 Enterprise edition. Many thanks.
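
    A hedged workaround that avoids full per-host copies: since allowLinking="true" is already set, each site's webroot can borrow the single CF engine's WEB-INF via a symlink instead of a copy, so one engine serves every host (paths are illustrative, and this pattern is a common community workaround rather than an Adobe-documented one):

      # Each site keeps its own webroot but links to the one CF WEB-INF
      ln -s /path/to/tomcat/webapps/cfusion/WEB-INF /mnt/webroot/mysite1/WEB-INF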

  • Content server backups

    - by Dan Sosedoff
    What is the best way to back up data on content servers? For example, I have 15 servers that just hold content, with no applications running on them. Each server has a 250 GB hard drive, so it's a pretty big amount of data. All of the data is accessible externally (via HTTP). So, the question is: what methodology is best in my case? The most useful method I know is cross-backup, where each server contains its own data plus a backup of one other server. But that means a significant reduction in total capacity. RAID?
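
    For illustration, a hedged sketch of the cross-backup idea with each server pushing an incremental copy to its neighbor (host names and paths are illustrative; a cron entry per server closes the ring, and rsync only moves the deltas after the first run):

      rsync -aH --delete /srv/content/ backup@content02:/srv/peer-backup/content01/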

  • Mapping tomcat apache worker

    - by metamorpheus
    I am running an Apache2 server connected to Tomcat 5.5.

    workers.properties:

      workers.tomcat_home=/usr/share/tomcat5.5
      workers.java_home=/usr/lib/jvm/java-6-sun
      ps=/
      worker.list=worker1
      worker.worker1.port=8009
      worker.worker1.host=127.0.0.1
      worker.worker1.type=ajp13
      worker.worker1.lbfactor=1

    The JkMount is defined as follows:

      LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
      JkWorkersFile /etc/apache2/workers.properties
      JkLogFile /var/log/apache2/mod_jk.log
      JkLogLevel debug
      JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
      JkMount /jsp-examples worker1
      JkMount /jsp-examples/* worker1
      JkMount /servlets-examples worker1
      JkMount /servlets-examples/* worker1
      JkMount /tcontainer worker1
      JkMount /tcontainer/* worker1

    If I call 127.0.0.1/servlets-examples, the examples are displayed and executed correctly. If I call [same server as above]/tcontainer, I get the following error (provided by Tomcat 5.5):

      The requested resource (/tcontainer) is not available.

    How can I define where to get the sources? I have a configuration file in /usr/share/tomcat-5.5-webapps/tcontainer.xml:

      <Context path="/tcontainer" docBase="/var/www/web96/html/tcontainer"
               debug="0" privileged="true" allowLinking="true">
      </Context>

    What did I forget to configure, or what is wrong with my definitions? Thanks
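
    A hedged guess: Tomcat never reads that file where it sits. Per-application context descriptors are picked up from conf/Catalina/<host>/, the file name supplies the context path (so it must be tcontainer.xml there, with the path attribute removed), and the Debian-style config path below is an assumption:

      sudo cp /usr/share/tomcat-5.5-webapps/tcontainer.xml /etc/tomcat5.5/Catalina/localhost/tcontainer.xml
      sudo /etc/init.d/tomcat5.5 restart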

  • GlusterFS on VMWare ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which are running Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could be a good choice.

    Purpose: I have about 2 dozen VM nodes with different configurations on 2 physical nodes, each with the following configuration: 16-core Intel Xeon, 1 TB disk, 48 GB RAM. So each physical server has about 1 TB of disk (and I can add more if I want), giving me 2 TB of disk space in total, distributed among the couple dozen VM nodes that live on them.

    Some of these, being application and management servers, have plenty of free disk space, which I want to pool into storage for datasets too large to fit on any single VM node. With the storage distributed between dozens of VM nodes across 2 or more physical nodes, I also get some sort of backup. I do not mind if data gets stored redundantly, but a single VM node cannot hold all of the data: for example, a 100 GB dataset would exceed a VM disk of 70 GB, and the VM also needs space for system and program files.

    I need a suggestion: will GlusterFS be the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if the data ends up stored redundantly in the process, that is fine, since it gives me data security.
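
    For scale, a hedged sketch of what a replicated Gluster volume over two of those VMs would look like (host names and brick paths are illustrative; bricks are just directories sitting on each node's spare space, and replica 2 stores every file on both nodes):

      gluster peer probe vm-node2
      gluster volume create shared replica 2 vm-node1:/bricks/shared vm-node2:/bricks/shared
      gluster volume start shared
      mount -t glusterfs vm-node1:/shared /mnt/shared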

  • zfs rename/move root filesystem into child

    - by Anton
    A similar question exists, but the solution (using mv) is awful because in that case it works as "copy, then remove" rather than a pure "move". So, I created a pool:

      zpool create tank /dev/loop0

    and rsynced my data from another storage directly into it, so that my data is now in /tank:

      zfs list
      NAME   USED  AVAIL  REFER  MOUNTPOINT
      tank   591G  2.10T   591G  /tank

    Now I've realized that I need my data to be in a child filesystem, not in the /tank filesystem directly. So how do I move or rename the existing root filesystem so that it becomes a child within the pool? A simple rename won't work:

      zfs rename tank tank/mydata
      cannot rename to 'tank/mydata': datasets must be within same pool

    (Btw, why does it complain that the datasets are not within the same pool when in fact I only have one pool?) I know there are solutions that involve copying all the data (mv, or sending the whole dataset to another device and back), but shouldn't there be a simple elegant way? Just noting that I do not care about snapshots at this stage (there are none yet to care about).
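
    For the record, a hedged sketch of the least-copy route within the same pool: a pool's root dataset cannot be renamed, but a snapshot can be sent into a child without leaving the pool -- still a copy on disk, just no second device or mv involved:

      zfs snapshot tank@move
      zfs send tank@move | zfs receive tank/mydata
      # after verifying tank/mydata, delete the snapshot and clear the old
      # files from the root dataset
      zfs destroy tank@move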

  • Wordpress Directory Permission to allow uploads, plugin folders, etc

    - by user1015958
    I have a pre-made WordPress site which was developed on my local machine and uploaded to a VPS running Debian 6 with nginx, MySQL, and PHP, following this guide:

    1) Create an unprivileged user, say 'karl', and make them belong to the www-data group, so that if I log in as karl and create a web root in /home/karl/www/, all the files will be owned by karl:www-data.
    2) Set up nginx as the user www-data in nginx.conf.
    3) Set up PHP-FPM to run as www-data.
    4) Place your files in /home/karl/www/[domain name maybe]/public_html/, uploading as 'karl' so you don't have to chown everything again.

    When I type ls -l inside public_html/, it shows that all the files inside are owned by karl:karl, but the public_html directory itself is owned by karl:www-data. I chmod 0755 the folder wp-content, but I still get the error:

      ERROR: Path ../wp-content/connection_images does not seem to be writeable.

    I know I shouldn't set it to 777 for security reasons, so what permissions should I set? And what should I set to allow my users to upload, write posts, and edit articles? Sorry for my English, by the way.
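
    A hedged sketch of the usual fix: PHP-FPM runs as www-data, so it is group ownership plus group write on the wp-content tree (not 0755, which gives the group no write bit) that unlocks uploads and plugin installs ([domain] is a placeholder for the actual directory name):

      chgrp -R www-data /home/karl/www/[domain]/public_html/wp-content
      chmod -R g+w /home/karl/www/[domain]/public_html/wp-content
      # setgid on directories keeps newly created files in the www-data group
      find /home/karl/www/[domain]/public_html/wp-content -type d -exec chmod g+s {} \;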

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting.

    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is on the same volume.

    My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same disk contention issue that you have with a small RAID setup (or at least we don't at the moment), and Standard edition doesn't support partitioning.

    So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
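
    For illustration, a hedged sketch of adding a second equally-sized file to spread I/O (names, paths, and sizes are illustrative; the one-file-per-core guideline is firmest for tempdb, and for user databases the parallelism gain is usually smaller, so it is worth measuring before multiplying files):

      sqlcmd -S . -Q "ALTER DATABASE MyDb ADD FILE (NAME='mydb_data2', FILENAME='F:\data\mydb_data2.ndf', SIZE=20GB, FILEGROWTH=1GB) TO FILEGROUP [PRIMARY]"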

  • Windows Task Scheduler fails on EventData instruction

    - by Pete
    The scheduled task fails on the EventData instruction in this XML:

      <ValueQueries>
          <Value name="eventChannel">Event/System/Channel</Value>
          <Value name="eventRecordID">Event/System/EventRecordID</Value>
          <Value name="eventData">Event/EventData/Data</Value>
      </ValueQueries>

    The other two fields can be passed as arguments, and the EventData syntax matches other websites, so I don't know why it's failing. This is the Event Viewer XML:

      <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
        <System>
          <Provider Name="Aptify.ExceptionManagerPublishedException" />
          <EventID Qualifiers="0">0</EventID>
          <Level>2</Level>
          <Task>0</Task>
          <Keywords>0x80000000000000</Keywords>
          <TimeCreated SystemTime="2013-11-07T19:39:14.000000000Z" />
          <EventRecordID>97555</EventRecordID>
          <Channel>Application</Channel>
          <Computer>[Computer Name]</Computer>
          <Security />
        </System>
        <EventData>
          <Data>General Information *********************************************
            Additional Info:
            ExceptionManager.MachineName: [Computer Name]
            ExceptionManager.TimeStamp: 11/7/2013 12:39:14 PM
            ExceptionManager.FullName: AptifyExceptionManagement, Version=4.0.0.0, Culture=neutral, PublicKeyToken=[key]
            ExceptionManager.AppDomainName: Aptify Shell.exe
            ExceptionManager.ThreadIdentity:
            ExceptionManager.WindowsIdentity: ACA_DOMAIN\pbassett
            1) Exception Information *********************************************
            Exception Type: Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntityValidationException
            Entity: Tasks
            ErrorString: Task Type "Make Contact" is not active.
            MachineName: [machine]
            CreatedDateTime: 11/7/2013 12:39:14 PM
            AppDomainName: Aptify Shell.exe
            ThreadIdentityName:
            WindowsIdentityName: [identity]
            Severity: 0
            ErrorNumber: 0
            Message: Task Type "Make Contact" is not active.
            Data: System.Collections.ListDictionaryInternal
            TargetSite: Boolean Save(Boolean, System.String ByRef, System.String)
            HelpLink: NULL
            Source: AptifyGenericEntity
            StackTrace Information *********************************************
            at Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntity.Save(Boolean AllowGUI, String& ErrorString, String TransactionID)</Data>
        </EventData>
      </Event>
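
    A hedged thing to try: this event's <Data> element carries no Name attribute, and the value-query examples that "just work" elsewhere usually target named data via Data[@Name='...']. An explicit positional selector is unverified here, but cheap to test:

      <Value name="eventData">Event/EventData/Data[1]</Value>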

  • Excel 2013: Is it possible to collapse rows only in a specific column?

    - by h7u9i
    In my spreadsheet, I'm trying to figure out a way to collapse rows in a specific column. Right now, if I do Data - Group - Group... - Rows, it collapses the entire row. I want to collapse rows only in a specific column. Example:

      |---------|----------|
      | hi      | + data1  |
      |---------|----------|
      | hello   | + data2  |
      |---------|----------|
      |         |          |
      |---------|----------|
      |         |          |

    And opening data1 would turn it into:

      |---------|----------|
      | hi      | - data1  |
      |---------|----------|
      | hello   | point1   |
      |---------|----------|
      |         | point2   |
      |---------|----------|
      |         | + data2  |
      |---------|----------|
      |         |          |
      |---------|----------|
      |         |          |

    Is this possible to do in Excel?

  • Unable to logon to vpn

    - by nitin pande
    My OpenVPN client log file -- the interesting bit:

      Tue Oct 26 12:32:49 2010 TLS Error: cannot locate HMAC in incoming packet from 67.228.223.12:3389
      Tue Oct 26 12:32:49 2010 Fatal TLS error (check_tls_errors_co), restarting
      Tue Oct 26 12:32:49 2010 TCP/UDP: Closing socket

    The rest of the log, just in case:

      Tue Oct 26 12:32:35 2010 OpenVPN 2.0.9 Win32-MinGW [SSL] [LZO] built on Oct 1 2006
      Tue Oct 26 12:32:48 2010 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
      Tue Oct 26 12:32:48 2010 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
      Tue Oct 26 12:32:48 2010 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
      Tue Oct 26 12:32:48 2010 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
      Tue Oct 26 12:32:48 2010 LZO compression initialized
      Tue Oct 26 12:32:48 2010 Control Channel MTU parms [ L:1544 D:168 EF:68 EB:0 ET:0 EL:0 ]
      Tue Oct 26 12:32:48 2010 Data Channel MTU parms [ L:1544 D:1450 EF:44 EB:135 ET:0 EL:0 AF:3/1 ]
      Tue Oct 26 12:32:48 2010 Local Options hash (VER=V4): 'ee93268d'
      Tue Oct 26 12:32:48 2010 Expected Remote Options hash (VER=V4): 'bd577cd1'
      Tue Oct 26 12:32:48 2010 Attempting to establish TCP connection with 67.228.223.12:3389
      Tue Oct 26 12:32:48 2010 TCP connection established with 67.228.223.12:3389
      Tue Oct 26 12:32:48 2010 TCPv4_CLIENT link local: [undef]
      Tue Oct 26 12:32:48 2010 TCPv4_CLIENT link remote: 67.228.223.12:3389
      Tue Oct 26 12:32:49 2010 TLS: Initial packet from 67.228.223.12:3389, sid=bd5f79fe 8475497f
      Tue Oct 26 12:32:49 2010 TLS Error: cannot locate HMAC in incoming packet from 67.228.223.12:3389
      Tue Oct 26 12:32:49 2010 Fatal TLS error (check_tls_errors_co), restarting
      Tue Oct 26 12:32:49 2010 TCP/UDP: Closing socket
      Tue Oct 26 12:32:49 2010 SIGUSR1[soft,tls-error] received, process restarting
      Tue Oct 26 12:32:49 2010 Restart pause, 5 second(s)
      [the same reconnect / "cannot locate HMAC" / restart cycle repeats every 5 seconds until 12:33:20, when the client exits on SIGTERM]

    Guys, I am extremely sorry for not presenting my error log properly; please forgive me and give me your valuable advice. I am using Windows 7, and I am using OpenVPN mainly to bypass censorship in the UAE. I am using only the client config file. The ca.crt file is in the config folder. Thanks and regards, Nitin

    My error log with the config1 file:

      Tue Oct 26 21:24:34 2010 OpenVPN 2.0.9 Win32-MinGW [SSL] [LZO] built on Oct 1 2006
      Tue Oct 26 21:24:46 2010 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
      ...
      Tue Oct 26 21:24:47 2010 TLS: Initial packet from 67.228.223.12:3389, sid=4244e662 e5a0572a
      Tue Oct 26 21:24:47 2010 TLS Error: cannot locate HMAC in incoming packet from 67.228.223.12:3389
      Tue Oct 26 21:24:47 2010 Fatal TLS error (check_tls_errors_co), restarting
      Tue Oct 26 21:24:47 2010 TCP/UDP: Closing socket
      Tue Oct 26 21:24:47 2010 SIGUSR1[soft,tls-error] received, process restarting

    Client config file:

      client
      dev tun
      proto tcp
      remote openvpn1.flashvpn.com 3389
      float
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca ca.crt
      ns-cert-type server
      tls-auth ta.key 1
      comp-lzo
      verb 3
      mute 20
      auth-user-pass
      route-method exe
      route-delay 2
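
    A hedged reading of that error: "cannot locate HMAC in incoming packet" is the signature of a tls-auth mismatch -- the client wraps the handshake with ta.key (direction 1), but the server's replies are not wrapped with the matching key. For illustration, the directives that must agree:

      # server side          # client side (as in the config above)
      tls-auth ta.key 0      tls-auth ta.key 1

    If the provider's server does not use tls-auth at all, removing the "tls-auth ta.key 1" line from the client config would be the thing to test; otherwise the ta.key file itself must be byte-identical on both ends.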

  • Formatted mac external hard drive loaded on pc

    - by kjokay
    I have an issue where an external hard drive which had been formatted on a Mac was loaded as a drive in Windows. Windows is obviously unable to read the data, and now the drive won't mount on the Mac. It appears that Windows overwrote something in the drive's information about what filesystems and types are on it. Mac Disk Utility is unable to repair the drive, and the partition shows up in the utility as FAT32. Using an AppleXsoft utility, I am able to verify the data is still on the drive, but I'd rather not spend $100 to save these files (it's not my hard drive anyway). Is there a way I can use some UNIX commands to find out the partition information on the drive, back up the raw data on it, then restore the data back onto the drive after reformatting it?
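
    A hedged starting point from Terminal (the disk identifier below is illustrative -- check diskutil list first, and image the raw device before changing anything):

      diskutil list
      sudo gpt show /dev/disk2          # GPT entries often survive a bad relabel
      sudo dd if=/dev/rdisk2 of=/Volumes/Spare/drive.img bs=1m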

  • RAID 5 configuration and future expansion

    - by Alexis Hirst
    Hi, I am building a PC to act as a file server among other things, and I was wondering whether it is a good idea to create 2 partitions on the RAID 5 array (one for the OS, one for data), or to have a separate disk for the OS and use the array only for data. Also, one day I may want to add another disk to the array, so would there be any issues resizing the data partition if the OS partition were also on the RAID 5 array?

  • Excel file growing huge (>150 MB)

    - by Josh
    There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once. The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6 MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open it, and that much time again to save it.

    The first time this happened, I copied the content of each sheet into a new, blank workbook and saved the new workbook; this brought it back down to about 6 MB. Now, it has blown up again. The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges. Copying all the data into a new workbook means re-setting up all the data validation, which is a pain and not something that we want to do every month.

    As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gig, and almost all of its 10 million lines look like this:

      <NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
      <NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
      <NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>

    I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need, though, is an understanding of why this keeps happening. What actions in Excel could be causing this?
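
    For the cleanup side, a hedged sketch that bulk-deletes the accumulated ".wvu." names (custom-view leftovers) through COM instead of the Name Manager; run it against a copy, and note that a shared workbook may need to be unshared before names can be deleted:

      $xl = New-Object -ComObject Excel.Application
      $xl.DisplayAlerts = $false
      $wb = $xl.Workbooks.Open("C:\temp\copy-of-bloated.xls")
      # Snapshot the collection first, since deleting while iterating skips items
      foreach ($n in @($wb.Names)) {
          if ($n.Name -like "*.wvu.*") { $n.Delete() }
      }
      $wb.Save()
      $xl.Quit()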

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and log shipping solution to achieve the following:

      15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time)
      5-minute Recovery Time Objective (must be able to get the db up and running within 5 minutes)

    I am considering using log shipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration:

      A 40 Mbit/sec fiber link between the primary and disaster recovery (DRC) sites
      The sites are about 600 km apart
      At close of business, the amount of data generated is predicted to be about 150 MB/sec
      Log backup is planned for every 5 min

    Doing some rough calculation, I came up with the following numbers: 40 Mbit/sec = 5 MB/sec at 100% network efficiency, which is 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. If we cut it down to 3 minutes of log shipping time, that equals ~900 MB over 3 minutes at 100% network efficiency, leaving about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used can restore 900 MB in 1 minute, but assume it can.

    For the COB scenario: at 150 MB/sec, the 3-minute log shipping window would have to carry about 27 GB of data... and I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Mbit/sec line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...
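
    A hedged back-of-envelope restating the gap (all at an optimistic 100% efficiency):

      40 Mbit/s = 5 MB/s = 300 MB/min  ->  ~900 MB fits in a 3-minute shipping window
      COB: 150 MB/s x 180 s = 27,000 MB (~27 GB) of log generated in that same window
      shortfall: ~30x more log than the link can carry

    Note that database mirroring over the same link ships essentially the same log stream, so it hits the same bandwidth wall; changing the transport changes the protocol overhead, not the arithmetic.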
