Search Results

Search found 47712 results on 1909 pages for 'looking for a script'.


  • legitimacy of the tasks in the task scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in the Task Scheduler in Windows Server 2008 and 2003? Can I check whether a task was added by Microsoft (i.e., from SCCM) or by a third-party application? For each task in the Task Scheduler, I want to verify that the task has not been created by a third-party application. I only want to allow standard Microsoft tasks and disable all other non-standard tasks. I have created a PowerShell script that goes through all the XML files in the C:\Windows\System32\Tasks directory, and I was able to read all the XML task files successfully, but I am stuck on how to validate the tasks. Here is the script for your reference:

        Function TaskSniper() {
            # Getting all the files in the Tasks folder
            $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object {!$_.PSIsContainer};
            [Xml] $StandardXmlFile = Get-Content "Edit Me";

            foreach($file in $files) {
                # Constructing the file path
                $path = $file.DirectoryName + "\" + $file.Name

                # Reading the file as an XML doc
                [Xml] $xmlFile = Get-Content $path

                # DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/
                # (Get-AuthenticodeSignature C:\Windows\System32\appidpolicyconverter.exe).status -eq "valid"

                # Display something
                $xmlFile.Task.Settings.Hidden
            }
        }

    Thank you
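
    A hedged sketch of one piece of the validation, in Python rather than the asker's PowerShell: walk the same Tasks directory and pull the executable path out of each task's Exec/Command element, so the binary's Authenticode signature can then be checked (for example with the commented-out Get-AuthenticodeSignature call above). The directory path and the namespace handling are assumptions, not a tested solution.

        # Hypothetical sketch: list the command each scheduled task runs, so its
        # signature can be verified afterwards. Paths and namespace handling are
        # assumptions for illustration only.
        import os
        import xml.etree.ElementTree as ET

        TASKS_DIR = r"C:\Windows\System32\Tasks"

        def task_commands(root_dir=TASKS_DIR):
            for dirpath, _dirnames, filenames in os.walk(root_dir):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        tree = ET.parse(path)
                    except ET.ParseError:
                        continue  # skip anything that is not task XML
                    # Task XML is namespaced, so match on the local tag name.
                    for elem in tree.iter():
                        if elem.tag.endswith("}Command") or elem.tag == "Command":
                            yield path, (elem.text or "").strip()

        if __name__ == "__main__":
            for task_file, command in task_commands():
                print(task_file, "->", command)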

    Read the article

  • Appropriate Network switch for small server cluster

    - by Chris Dutrow
    We need to build a small business server cluster for the purpose of crunching data. It will not host a web site that needs to be available 24/7, but it does need to support servers that host Redis, a Cassandra database cluster, and a Python web server. The operating system will most likely be CentOS 6.4. The servers in the cluster should be able to communicate very quickly with each other, especially with the Redis server; this will probably require the use of internal IP addresses. We will need to use multi-data-center replication to synchronize the Cassandra cluster with the one that we currently have hosted in the cloud. We have been looking into network switches and are unsure of the appropriate specifications we should be looking for. Does the switch need to be "managed" or can it be "unmanaged"? Does the switch need to support IPv6 or just IPv4? Do we need an enterprise-level Cisco switch, or can we go with something like a $200 D-Link managed (or unmanaged) small business switch? Thanks so much!

    Read the article

  • How would one run a task sequence within a task sequence in SCCM 2012 SP1

    - by BigHomie
    A shining example: inside all of my task sequences I have a group that installs driver packages conditionally based on computer model. And of course, this list does nothing but grow. The fact that it grows isn't a big deal; what is a big deal is that every time it changes I have to manually copy and paste those changes across every task sequence I have, which of course leaves huge room for human error. The same goes for other groups of tasks that are common across task sequences. I'm looking for a solution that would let me centrally manage these tasks, be it linking other task sequences to a group within another task sequence, or creating a separate task sequence and linking to that. I came across a solution by John Marcum (SCCM MVP) that mentioned this ability, but that was a while ago and I can't find the link anymore to see if it's even still being updated/maintained. In any case, I'm looking for more of a free solution; even using PowerShell or the ConfigMgr SDK is fine with me, as I'm no stranger to either. Update: getting close: http://msdn.microsoft.com/en-us/library/jj217869.aspx

    Read the article

  • recursively "normalize" filenames

    - by user62367
    I mean getting rid of special chars in filenames, etc. I have made a script that can recursively rename files [http://pastebin.com/raw.php?i=kXeHbDQw], e.g.:

    before: THIS i.s my file (1).txt
    after running the script: This-i-s-my-file-1.txt

    OK, the script is at the pastebin link above. But when I wanted to test it "fully", with filenames like these:

    ¤¥¦§¨©ª«¬®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÂÃÄÅÆÇÈÊËÌÎÏÐÑÒÔÕרÙUÛUÝÞßàâãäåæçèêëìîïðñòôõ÷øùûýþÿ.txt
    áíüuúöoóéÁÍÜUÚÖOÓÉ!"#$%&'()*+,:;<=>?@[\]^_`{|}~€‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ¡¢£.txt

    it fails [http://pastebin.com/raw.php?i=iu8Pwrnr]:

    $ sh renamer.sh directorythathasthefiles
    mv: cannot stat `./áíüuúöoóéÁÍÜUÚÖOÓÉ!"#$%&\'()*+,:;<=>?@[]^_`{|}~€‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ¡¢£': No such file or directory
    (the same "cannot stat" error is repeated for each file) ...and so on
    $

    So "mv" can't handle the special chars. :\ I worked on it for many hours. Does anyone have a working one? [that can handle chars [filenames] in those 2 lines too?]
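
    Independent of the asker's renamer.sh (which lives only on pastebin), a hedged sketch of the same idea in Python: os.rename receives the raw filename directly, so nothing is re-parsed by the shell and special characters survive the round trip. The whitelist, the dash separator, and the "unnamed" fallback below are assumptions for illustration.

        #!/usr/bin/env python3
        # Hypothetical sketch: recursively rename files/directories to ASCII-safe names.
        # Not the original renamer.sh; the whitelist and fallback name are assumptions.
        import os
        import re
        import sys
        import unicodedata

        def normalize(name):
            # Decompose accented characters and drop the combining marks (e.g. é -> e),
            # then squeeze every run of other characters into a single dash.
            ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode("ascii")
            ascii_name = re.sub(r"[^A-Za-z0-9._]+", "-", ascii_name).strip("-")
            return ascii_name or "unnamed"

        def rename_tree(root):
            # Walk bottom-up so renaming a directory never invalidates paths
            # of entries that still have to be processed inside it.
            for dirpath, dirnames, filenames in os.walk(root, topdown=False):
                for old in filenames + dirnames:
                    new = normalize(old)
                    if new != old:
                        src = os.path.join(dirpath, old)
                        dst = os.path.join(dirpath, new)
                        if not os.path.exists(dst):  # do not clobber existing entries
                            os.rename(src, dst)

        if __name__ == "__main__":
            rename_tree(sys.argv[1] if len(sys.argv) > 1 else ".")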

    Read the article

  • recursively "normalize" filenames

    - by user66732
    I have made a script that can recursively rename files to get rid of special chars, etc. in filenames, e.g.:

    before: THIS i.s my file (1).txt
    after running the script: This-i-s-my-file-1.txt

    But when I wanted to test it "fully", with filenames like these:

    ¤¥¦§¨©ª«¬®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÂÃÄÅÆÇÈÊËÌÎÏÐÑÒÔÕרÙUÛUÝÞßàâãäåæçèêëìîïðñòôõ÷øùûýþÿ.txt
    áíüuúöoóéÁÍÜUÚÖOÓÉ!"#$%&'()*+,:;<=>?@[\]^_`{|}~€‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ¡¢£.txt

    it fails:

    $ sh renamer.sh directorythathasthefiles
    mv: cannot stat `./áíüuúöoóéÁÍÜUÚÖOÓÉ!"#$%&\'()*+,:;<=>?@[]^_`{|}~€‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ¡¢£': No such file or directory
    (the same "cannot stat" error is repeated for each file) ...and so on

    So "mv" can't handle the special chars. :\ I worked on it for many hours. Does anyone have a working one? [that can handle chars [filenames] in those 2 lines too?] Q on pastebin: http://pastebin.com/raw.php?i=19iYZpwY

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a web Linux-based infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef, and most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts those packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get update once per day and we definitely do not want to run apt-get update unconditionally on every chef-client wake-up. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature). The current process has a number of drawbacks we want to address. First off, it is asynchronous: the deployment script does not check the chef-client logs after the restart, so we don't even know whether the deployment was successful, and it does not even wait for the Chef clients to complete their run. Second, we definitely do not want to force chef-client restarts on all nodes because we usually deploy only a small number of packages. And third, I am not quite sure using chef-client for deployment is legitimate; probably we are just doing it wrong from the start. Please share your thoughts/experience.
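
    On the "asynchronous" drawback specifically, a hedged sketch of how a deploy step could wait for each node and check the result instead of firing and forgetting. It assumes passwordless SSH and sudo and a placeholder node list, and is not the asker's build system; chef-client's --once flag performs a single converge and exits non-zero on failure.

        # Hypothetical sketch: run a one-shot converge on every node over SSH and
        # fail the deployment if any node fails. Node names, SSH access, and sudo
        # rights are assumptions, not the asker's actual setup.
        import subprocess
        import sys

        NODES = ["web01", "web02", "db01"]  # placeholder node list

        def converge(node):
            result = subprocess.run(
                ["ssh", node, "sudo chef-client --once"],
                capture_output=True, text=True,
            )
            return result.returncode, result.stdout + result.stderr

        def main():
            failed = []
            for node in NODES:
                code, output = converge(node)
                if code != 0:
                    failed.append(node)
                    sys.stderr.write("%s: chef-client failed\n%s\n" % (node, output))
            if failed:
                sys.exit("Deployment failed on: " + ", ".join(failed))

        if __name__ == "__main__":
            main()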

    Read the article

  • anti-virus / malware solution for small non-profit network

    - by Jason
    I'm an IT volunteer at a local non-profit. I'm looking for a good AV/malware solution. We currently use a mishmash of different client solutions and want to move to something centralized. There is no full-time IT staff. What I'm looking for:

    - centralized administration (the server is Windows Server 2003)
    - minimal admin overhead
    - ability to do e-mail notification/alerts/reporting would be very cool
    - 10-25 XP clients (P3/P4 hardware)
    - free or discounted solution for non-profits

    We can get a cheap license for Symantec Endpoint Protection. My past experience with Symantec has been bad, but I've heard good things about this product. However, I've also read that it's kind of a nightmare to set up and administer, and it may not be worth it for the size of our network.

    Read the article

  • How do I set up different configurations on an Ubuntu laptop based on different physical locations?

    - by Andrew Larned
    I'm looking for a way to have two or three configurations set up on my laptop and easily switch between them. To be more specific: when my laptop is at work, it's plugged into a second monitor and has a specific set of network configurations. At home, the second monitor is gone and the network configurations are different. At a public wireless access point there are other configurations to set, etc. I know I can go into my preferences and turn the monitor on/off and mess with the networking preferences, and so on, but I'm looking for a way to change a bunch of preferences all at once. If it's possible to do that automatically, maybe based on the wireless APs in the vicinity, that would be even better.
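
    A minimal sketch of the "based on the wireless APs in the vicinity" idea, with hypothetical SSIDs, display output names, and profile commands (not a tested solution; hooking the same logic into a NetworkManager dispatcher script would be the more idiomatic route):

        # Hypothetical sketch: pick a profile from the currently associated SSID.
        # The SSIDs, xrandr output names, and commands below are placeholders.
        import subprocess

        PROFILES = {
            "OfficeWifi": ["xrandr --output VGA1 --auto --right-of LVDS1"],  # second monitor on
            "HomeWifi":   ["xrandr --output VGA1 --off"],                    # laptop panel only
        }
        DEFAULT = ["xrandr --output VGA1 --off"]

        def current_ssid():
            # `iwgetid -r` prints just the SSID of the associated wireless network.
            try:
                return subprocess.check_output(["iwgetid", "-r"], text=True).strip()
            except subprocess.CalledProcessError:
                return ""  # not associated with any access point

        def apply_profile():
            for command in PROFILES.get(current_ssid(), DEFAULT):
                subprocess.call(command.split())

        if __name__ == "__main__":
            apply_profile()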

    Read the article

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical, it sends a mail, but when a server is down no mail is sent. Even if all the services go to a critical state, no mail is sent. I have the following configuration:

    define command {
        command_name    notify-host-by-email
        command_line    python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
    }

    define command {
        command_name    notify-service-by-email
        command_line    python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
    }

    The Python script just sends a mail. It works if I execute it from the command line, but no email is sent from Nagios. What am I doing wrong?

    UPDATE: The contact data is:

    define contact {
        contact_name                    root
        alias                           Root
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r
        host_notification_options       d,r
        service_notification_commands   notify-service-by-email
        host_notification_commands      notify-host-by-email
        email                           [email protected]
    }

    define contactgroup {
        contactgroup_name   admins
        alias               Nagios Administrators
        members             root
    }
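
    The question doesn't include send_mail.py itself; for context, a hedged sketch of what such a script usually looks like, taking the subject as argv[1] and the body as argv[2] (the addresses and SMTP host here are placeholders, not the asker's values):

        #!/usr/bin/env python
        # Hypothetical stand-in for /etc/nagios3/send_mail.py; not the asker's script.
        # argv[1] is the subject, argv[2] the body; addresses/SMTP host are placeholders.
        import smtplib
        import sys
        from email.mime.text import MIMEText

        def main():
            subject, body = sys.argv[1], sys.argv[2]
            msg = MIMEText(body.replace("\\n", "\n"))  # expand the literal \n Nagios passes
            msg["Subject"] = subject
            msg["From"] = "nagios@localhost"
            msg["To"] = "root@localhost"
            server = smtplib.SMTP("localhost")
            server.sendmail(msg["From"], [msg["To"]], msg.as_string())
            server.quit()

        if __name__ == "__main__":
            main()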

    Read the article

  • Assisting a user using VNC

    - by wizlog
    I'm trying to help a friend with his computer remotely. He has a version of VNC, I'm not sure which one, so I downloaded and installed the VNC Viewer and Listening Viewer. I asked my friend for his IP address and port number, and he tells me that every time he's used it, the person giving him assistance (remotely controlling his computer) gave him some kind of number. I should note that my friend is not tech savvy; it may have been an IP address, but I'm not sure. What kind of number would the user be looking for? Does this depend on his version of VNC, and does that even matter? I am completely foreign to VNC and don't know where to begin looking for an answer myself, and (for whatever reason) I can't ask my friend to figure out what version of VNC he is using.

    Read the article

  • On Windows 7, how can I make the Start Menu search feature include matches from within words, not just the start of the word?

    - by Gabriel
    I have a program installed called WinSCP. When I press the Windows key and type "SCP", I get "No items match your search." Is there a configuration option I can set somewhere, so that this item will be found? I'm not looking for a specific solution for this particular program, but something general, so that if there's a program named XYZ, I can find it via the Start Menu search by entering YZ. EDIT TO ADD: I'm looking for a "set-it and forget-it" type of configuration change, so that within-word searching happens always, automatically. I don't want to have to type a * before every query. Apparently this wasn't clear from what I wrote above.

    Read the article

  • Linux Security/Sysadmin Courses in London?

    - by mister k
    Hi, my employer has offered to send me on a couple of training courses and I'm just looking for some recommendations. I'm mainly looking to improve my security and general sysadmin skills. I would like to do something focused on UNIX, as I mainly work with Linux boxes (but also a couple of FreeBSD boxes). I don't want to do a study-from-home course, so I would need to find somewhere based in London. It would be great to hear from anyone who has some experience with this kind of course. The courses I've found so far are:

    www.learningtree.co.uk/courses/uk433.htm
    www.city.ac.uk/cae/cfa/computing/systems_it/linux.html
    www.city.ac.uk/cae/cfa/computing/systems_it/unix_tools_ss.html

    I'm not sure the City University courses are advanced enough, as I already have experience... Thanks!

    Read the article

  • Ad Agency storage/file server +backup needed (NAS or something else?)

    - by Rob
    Looking for a "this is all you need" recommendation. We're a small ad agency with both mac & pcs that access and share files from a 3 yr old Windows 2000 box (no server software). We currently have 1TB on the "server" and back it up to 2 different Seagate Free Agent Pro 1TB external drives. But we're low on space and are looking for something that's bigger, that we can still access from Mac & PC, EASY backup system, secure from viruses, firewall enabled. Not sure if a NAS will work or if we should have a real server. We don't really get on that box except to restore files, or run Norton on it. I hope I've provided enough for a general recommendation. Thanks. Rob Phx

    Read the article

  • Files deleted. What could have happened?

    - by jjfine
    I'm having a weird issue today. I was writing and testing some simple CGI scripts this morning when I realized that I couldn't run them from one of the other computers on the (Windows) network. So I had my network admin come in and take a look at what was going on. A few minutes later a co-worker came in and told me that a bunch of files he was working with, as well as a bunch of others (all *.c files) on the network drive, got deleted. He also noticed some strange apache_dump_500.log.txt files in the same directories where the files got deleted. The apache_dump_500.log.txt files all look like this:

    REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
    REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
    REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
    REDIRECT_QUERY_STRING=
    REDIRECT_REMOTE_ADDR=<my computer's local ip>
    REDIRECT_REMOTE_HOST=
    REDIRECT_SERVER_NAME=<my computer's domain url>
    REDIRECT_SERVER_PORT=
    REDIRECT_SERVER_SOFTWARE=
    REDIRECT_URL=/cgi-bin/trojan.py

    I looked and I don't have any trojan.py in my cgi-bin folder, and all my Apache logs are clean. The Windows event logger seems to have no traces of what happened either. My httpd.conf: http://pastebin.com/Yny2Yh8v

    I think we've got some kind of virus that added this trojan.py file to my cgi-bin, ran the script, and deleted the script and any traces from the logs. Is this a thing that happens? Any ideas whatsoever would be much appreciated!

    Read the article

  • Weblogic WLST classpath

    - by lepricon28
    When I run the WLST setup .sh script to set the environment as follows, why can't I see the updated path when I do echo?

    [linbox2 bin]$ ./setWLSEnv.sh
    CLASSPATH=/directory/ols_wls/patch_wlss1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/directory/ols_wls/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/directory/ols_wls/patch_oepe1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/directory/ols_wls/patch_ocm1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/directory/ols_wls/jrockit_160_14_R27.6.5-32/lib/tools.jar:/directory/ols_wls/utils/config/10.3/config-launch.jar:/directory/ols_wls/wlserver_10.3/server/lib/weblogic_sp.jar:/directory/ols_wls/wlserver_10.3/server/lib/weblogic.jar:/directory/ols_wls/modules/features/weblogic.server.modules_10.3.2.0.jar:/directory/ols_wls/wlserver_10.3/server/lib/webservices.jar:/directory/ols_wls/modules/org.apache.ant_1.7.0/lib/ant-all.jar:/directory/ols_wls/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:
    PATH=/directory/ols_wls/wlserver_10.3/server/bin:/directory/ols_wls/modules/org.apache.ant_1.7.0/bin:/directory/ols_wls/jrockit_160_14_R27.6.5-32/jre/bin:/directory/ols_wls/jrockit_160_14_R27.6.5-32/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/java/j2sdk1.4.2_11/bin/bin:/home/oracle/bin:/directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:/directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
    Your environment has been set.
    [linbox2 bin]$ export CLASSPATH
    [linbox2 bin]$ export PATH
    [linbox2 bin]$ echo $PATH
    /usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/java/j2sdk1.4.2_11/bin/bin:/home/oracle/bin:/directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:/directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
    [linbox2 bin]$

    Read the article

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. I have most things working well, but the part about TABLE CACHE makes no sense:

    TABLE CACHE
    Current table_cache value = 900 tables.
    You have a total of 0 tables
    You have 900 open tables.
    Current table_cache hit rate is 1%, while 100% of your table cache is in use.
    You should probably increase your table_cache

    When I do SHOW STATUS; I get the following table-related numbers:

    Open_tables = 900
    Opened_tables = 0

    It seems like something is going wrong. I have some extra memory I could use to increase the table_cache size, but my sense is that the 900 tables already available aren't doing anything, and increasing it will just waste more energy. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits to them? I have 150 max connections and probably no more than 4 tables per join, FWIW. Here is the tuner script output for temp tables, which I've also been tuning:

    TEMP TABLES
    Current max_heap_table_size = 90 M
    Current tmp_table_size = 90 M
    Of 11032358 temp tables, 40% were created on disk
    Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables. Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables.

    Read the article

  • Understanding !d command in sed with respect to saves

    - by richardh
    I have a directory of tab-delimited text files, and some have comments in the first few lines that I would like to delete. I know that the first good line starts with "Mark", so I can use /^Mark/,$!d to delete these comments. After this deletion I make several other replacements in the (new) first line, which holds the variable names. My question is: why do I have to save sed's output to a file to get my script to work? I understand that if a line is deleted, then there is no output for it to pass downstream. But if I don't delete (i.e., !d), then why do I have to save to a file? Thanks! Here is my shell script. (I'm a sed newbie, so any other feedback is also appreciated.)

    #!/bin/bash
    for file in *.txt; do
        mv $file $file.old1
        sed -e '/^Mark/,$!d' $file.old1 > $file.old2
        sed -e '1s/\([Ss]\)hareholder/\1hrhldr/g'\
            -e '1s/\([Ii]\)mmediate/\1mmdt/g'\
            -e '1s/\([Nn]\)umber/\1o/g'\
            -e '1s/\([Cc]\)ompany/\1o/g'\
            -e '1s/\([Ii]\)nformation/\1nfo/g'\
            -e '1s/\([Pp]\)ercentage/\1ct/g'\
            -e '1s/\([Dd]\)omestic/\1om/g'\
            -e '1s/\([Gg]\)lobal/\1lbl/g'\
            -e '1s/\([Cc]\)ountry/\1ntry/g'\
            -e '1s/\([Ss]\)ource/\1rc/g'\
            -e '1s/\([Oo]\)wnership/\1wnrshp/g'\
            -e '1s/\([Uu]\)ltimate/\1ltmt/g'\
            -e '1s/\([Ii]\)ncorporation/\1ncorp/g'\
            -e '1s/\([Tt]\)otal/\1ot/g'\
            -e '1s/\([Dd]\)irect/\1ir/g'\
            $file.old2 > $file
        rm -f $file.old*
    done

    Read the article

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant access to the logs for the PHP application? Let's assume that the log files can be really large, like hundreds of megabytes. I have some ideas:

    - Write a shell script that would be run via sudo and tail the last 512 KB of the log into a separate file that can be read by the application - that's inefficient, because it forks a new process and the data has to be read twice.
    - Add www-data to the adm group (which can read the logs) - that's insecure.
    - Start a PHP process via cron every minute to read the logs - that's not very good, because it doesn't allow real-time monitoring. Also, this script will run even when I'm not reading logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it).
    - Create a hardlink for all log files with lowered permissions - I guess that won't work, because logrotate could recreate the log files and they'd change inode number.
    - Start a separate nginx/Apache server under a privileged user that may read the logs.

    Maybe anyone has a better solution?
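
    On the "data has to be read twice" concern in the first idea: a hedged sketch of a seek-based tail that returns only the last 512 KB without scanning the whole file (shown in Python for brevity; PHP's fseek/fread work the same way, and the log path is a placeholder):

        # Hypothetical sketch: read only the last `max_bytes` of a large log file.
        # The path is a placeholder; the reading user still needs permission on it.
        import os

        def tail_bytes(path, max_bytes=512 * 1024):
            with open(path, "rb") as f:
                f.seek(0, os.SEEK_END)
                size = f.tell()
                f.seek(max(0, size - max_bytes))
                data = f.read()
            # Drop the possibly truncated first line so output starts on a line boundary.
            return data.split(b"\n", 1)[-1].decode("utf-8", errors="replace")

        if __name__ == "__main__":
            print(tail_bytes("/var/log/syslog"))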

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:

    dbs=( analytics auth logs users )
    server=localhost
    conn="-h ${server} -u ${username} --password=${password}"

    # Stop the replication client so we don't encounter weird data.
    echo "STOP SLAVE" | mysql ${conn}

    # Bunch of bulk insert optimizations
    echo "SET autocommit=0" | mysql ${conn}
    echo "SET unique_checks=0" | mysql ${conn}
    echo "SET foreign_key_checks=0" | mysql ${conn}

    # Restore all databases and tables.
    for sourcedb in ${dbs[*]}
    do
        destdb=${prefix}${sourcedb}
        echo "Dropping database ${destdb}..."
        echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
        echo "CREATE DATABASE ${destdb}" | mysql ${conn}

        # First, all the tables.
        for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
            if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                createTable=`echo "SHOW CREATE TABLE ${table}"|mysql -B -r $conn $sourcedb|tail -n +2|cut -f 2-`
                echo "Restoring ${destdb}/${table}..."
                echo "$createTable ;" | mysql $conn $destdb
                insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                echo "$insertData" | mysql $conn $destdb
            fi
        done
    done

    echo "SET foreign_key_checks=1" | mysql ${conn}
    echo "SET unique_checks=1" | mysql ${conn}
    echo "COMMIT" | mysql ${conn}

    # Restart the replication client
    echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all INNODB tables. Thanks!

    Read the article

  • Is Software Raid1 Using mdadm with a Local Hard Disk and GNDB Possible?

    - by Travis
    I have multiple webservers which use many small files to create dynamic web pages. Caching the web pages isn't an option. The webserver also performs writes, so I need a synchronous filesystem. I'm looking to maximise performance, as it's my understanding that small files are the weakness (to varying degrees) of a cluster filesystem over Ethernet. Currently I'm using CentOS 5.5, 64-bit. Since it's only about 300MB of data, I'm looking at mdadm using RAID-1 with a GNBD and a local hard disk, using the "--write-mostly" option so the reads are done from the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so I won't see a performance gain by using tmpfs, assuming there's enough RAM available?

    Read the article

  • How to draw diagrams in Open Office?

    - by Amokrane
    Hi, I would like to draw diagrams using OpenOffice, but I didn't find any suitable diagram tools installed by default. What I am looking for exactly are diagrams that look like the ones that come with MS Office 2007/2010 (like pyramid diagrams, star diagrams, etc.). Any idea? A plugin to install? Otherwise, are there any online services that can do it? (I have tested Cacoo and Gliffy, but they don't really offer the diagrams that I am looking for.) Thanks!

    Read the article

  • Pervasive database backup

    - by Steven
    I'm looking for the best way to back up my Pervasive database. I've read the documentation but still have a few questions. It appears that the Continuous Operations method only allows me to back up the entire database? So I'd do butil -startbu @filelist, then back up the entire database (copy, rsync, etc.), then run butil -endbu @filelist. Looking through the documentation, I don't see a way to get transaction logs out of this method, like I would for MSSQL (BACKUP LOG ACCT TO DISK) or Postgres (archive_command). With rsync, it might be feasible to still do this every 15 minutes. The Archival Logging method means I would have to occasionally stop the database to get a full backup, which is acceptable for me. But can I copy the log files off of the server every 15 minutes, i.e., log shipping? Thank you.

    Read the article

  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multitenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools.

    We have an application on AIX that requires dozens of changes to config files (some plain text, some XML) as well as a good number of commands to be run on multiple servers - some to start the new instance, some to restart our shared authentication and reporting engines, etc. The config changes follow templates, of course. The servers in question will also depend on the initial conditions specified by the implementer/deployer - we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set of servers is active-passive - in short, there are a lot of complications.

    We have another application that runs on IIS 6 and SQL. The DBAs don't want any automation of the SQL components and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template Virtual Directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the filepaths and AppPool names with the new ones, and then make a new VirDir from the modified template XML to point to the new filesystem folder and app pool.

    For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have, not a fixed requirement - has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this.

    Edit - based on someone's suggestion, I looked at CFEngine for the AIX application, but it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers; we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them - some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone because it's mostly done by hand. I guess I'm looking for config-file generation and management. I have built a small Perl script to do something similar for a much smaller case - it binds a CSV file into variables and then does a copy-and-search-and-replace from a set of template config files. I could probably do the same here.
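
    As a point of comparison with that Perl approach, a hedged sketch of the same CSV-driven copy-and-substitute step (the file layout, the {{placeholder}} syntax, and the one-row-per-customer assumption are illustrative only, not the poster's format):

        # Hypothetical sketch: render a tree of *.tmpl config templates per customer,
        # substituting {{column}} tokens with values from one CSV row per customer.
        import csv
        import pathlib
        import re

        def render(template_text, values):
            # Replace every {{column_name}} token with the value from the CSV row;
            # unknown tokens are left untouched so they are easy to spot afterwards.
            return re.sub(r"\{\{(\w+)\}\}", lambda m: values.get(m.group(1), m.group(0)), template_text)

        def deploy_customer(row, template_dir, output_dir):
            out_root = pathlib.Path(output_dir) / row["customer"]
            for template in pathlib.Path(template_dir).glob("**/*.tmpl"):
                rendered = render(template.read_text(), row)
                target = out_root / template.relative_to(template_dir).with_suffix("")
                target.parent.mkdir(parents=True, exist_ok=True)
                target.write_text(rendered)

        if __name__ == "__main__":
            with open("customers.csv", newline="") as f:
                for row in csv.DictReader(f):
                    deploy_customer(row, "templates", "instances")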

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1. Following the instructions on http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2 I adapted the instructions for the appclient script on Linux by adding the lines:

    APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar
    export APPCPATH

    to the appclient shell script. This, however, is not working. On running the application client (using GlassFish's webstart), I get the error:

    WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList"

    Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink? Or any ideas on what I might be doing wrong? I found this bug report: http s://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 (New users can't post more than 1 link, so remove the space between 'http' and 's'.) There Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place." I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.

    Read the article
