Search Results

Search found 51910 results on 2077 pages for 'run level'.

  • SLI Video Cards

    - by Scott
    I'm configuring a new desktop. I'm looking through all PCI Express 2.0 x16 video cards on NewEgg at the moment. All cards seem to state that they have SLI support or CrossFireX support. I know that these two technologies let you connect more than one video card together for a more powerful video experience. The cards with this support seem to be more expensive than other cards, though. When I did a NewEgg search for cards that did NOT have either of these, no matches came up for PCI Express 2.0 x16 (haven't checked other interfaces yet, though). Can you run an SLI/CrossFireX card as a single card (and if you can, is it efficient or even good?), or do you need more than one card to run them? Also, do they still make cards without these two technologies, and are they any cheaper at this point? I really have no need for SLI/CrossFireX at all, so I was wondering what I should look into. Do cards without this technology not run on the PCI Express 2.0 x16 interface? Thanks ahead of time for all your help.

    Read the article

  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with default options. I then copied a website to it (from Windows 7, IIS 7) and after a little tweaking the website is working fine. The website is currently using and working with Anonymous Authentication. I have gone back to the Windows Components/Server Manager, Roles - Security, and ticked and installed Windows Authentication. When I check my server in IIS (top level, above sites) - Authentication, I see Anonymous Authentication (enabled), ASP.NET Impersonation (disabled), Forms Authentication (disabled), Windows Authentication (enabled). When I check my default website - Authentication, I see the same as above but "Retrieving status" and an error dialog saying: There was an error while performing this operation. Details: Filename: c:\inetpub\wwwroot\screwturnwiki\web.config Line number: 96 Error: This configuration section cannot be used in this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="False". I have tried hand-editing the web.config with no success. Un-installing Windows Authentication happily returns my site to working with Anonymous Authentication, and allows me to enable/disable these three options. FYI, I am using ScrewTurnWiki with the Active Directory plug-in.
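
    A hedged starting point, assuming the section locked at line 96 is the Windows authentication one: unlock it at the server level with appcmd, then configure it per site again.

      %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/security/authentication/windowsAuthentication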

    Read the article

  • How to use WPA2 client mode in the Linux-based Cisco WAP4410N access point

    - by joechip
    I have a Cisco WAP4410N access point that I want to use as a client to connect to a WPA2 wireless network (for WLAN service monitoring purposes). Supposedly this access point supports a "Wireless Client/Repeater" mode that allows this. The Repeater function is optional (I have that box unchecked so that nobody can connect to this access point wirelessly). I have verified through SSH that the access point gets configured as a client and not as a Master, but it never associates with the SSID I ask it to. This is what iwconfig shows: ath04 IEEE 802.11ng ESSID:"myownssid" Mode:Managed Channel:0 Access Point: Not-Associated Bit Rate:0 kb/s Tx-Power:14 dBm Sensitivity=1/3 Retry:off RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=0/94 Signal level=161/162 Noise level=161/161 Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 Although I've never done this from the command line, I suppose I could use wpa_supplicant or wpa_cli to associate it, but I don't know how to do that without editing configuration files, and the filesystem is read-only. Besides, I would have to run those commands manually after every reboot. I'd like to know how to do this the Cisco way, if possible. If not, any trick to make this work would be useful. Edit: This is with the latest firmware, 2.0.4.2. And I found that not all of the filesystem is read-only, since /var and /tmp are mounted with type ramfs.
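
    For a command-line test, a minimal wpa_supplicant run might look like the sketch below - hedged, since it assumes the firmware ships wpa_supplicant and that ath04 is the client VAP; the passphrase is a placeholder. /tmp being ramfs means the config can live there (and is also why it won't survive a reboot).

      cat > /tmp/wpa.conf <<'EOF'
      network={
          ssid="myownssid"
          key_mgmt=WPA-PSK
          proto=RSN
          psk="your-passphrase-here"
      }
      EOF
      wpa_supplicant -B -i ath04 -c /tmp/wpa.conf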

    Read the article

  • How can I recover my system after running 'mkfs' on the system partition?

    - by Filip Podgórny
    I am not a Linux user. While doing some homework I blindly typed sudo mkfs ext3 dev/sda2 (Ubuntu was installed from within my Windows installation). I did a few more things, then shut Ubuntu down to switch back to Windows. "No operating system installed" is the message I now get. I plugged my HDD into another computer and all my files are still there. What should I do to get my Windows installation back? df -l (before mkfs): /dev/loop0 29G 2,0G 27G 8% / udev 3,0G 4,0K 3,0G 1% /dev tmpfs 1,2G 900K 1,2G 1% /run none 5,0M 0 5,0M 0% /run/lock none 3,0G 1,3M 3,0G 1% /run/shm /dev/sda3 455G 123G 333G 27% /host /dev/sdb1 1,9G 820M 1,1G 43% /media/PHONE CARD mkfs output (translated from Polish): mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=1024 (log=0) Fragment size=1024 (log=0) Stride=0 blocks, Stripe width=0 blocks 25688 inodes, 102400 blocks 5120 blocks (5.00%) reserved for the super user First data block=1 Maximum filesystem blocks=67371008 13 block groups 8192 blocks per group, 8192 fragments per group 1976 inodes per group Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729 Writing inode tables: done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 30 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
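
    If the Windows files on /dev/sda3 are intact and only the boot code was damaged, the usual repair is to boot a Windows install disc and rebuild the boot records from the recovery environment - hedged, since the Windows version isn't stated (XP's recovery console uses fixmbr/fixboot instead):

      bootrec /fixmbr
      bootrec /fixboot
      bootrec /rebuildbcd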

    Read the article

  • Question About mk-table-checksum Results

    - by stevenmusumeche
    Hello, I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this: mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So, no locking is needed because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master: mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx If there are any differences, I repair the slaves like this: mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but one slave always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output: Differences on h=x.x.x.x,p=...,u=maatkit DB TBL CHUNK CNT_DIFF CRC_DIFF BOUNDARIES IRC product 10 0 1 product_id = 147377 AND product_id < 162085 IRC post_order_survey 0 0 1 1=1 IRC mk_heartbeat 0 0 1 1=1 IRC mailing_list 0 0 1 1=1 IRC honey_pot_log 0 0 1 1=1 IRC product 12 0 1 product_id = 176793 AND product_id < 191501 IRC product 18 0 1 product_id = 265041 IRC orders 26 0 1 order_id = 694472 IRC orders_product 6 0 1 op_id = 935375
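
    To see which rows (if any) actually differ without changing anything, a hedged next step is mk-table-sync in --print mode run against the suspect slave - the credentials and the product table come from the question, the hostname is a placeholder:

      mk-table-sync --print --sync-to-master h=slave_host,u=maatkit,p=xxxx,D=IRC,t=product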

    Read the article

  • What is wrong with my Watcher (incron-like) daemon?

    - by eric01
    I have installed Watcher this way: both watcher.py and watcher.ini are located in /etc. I also installed pyinotify, and it does work when I use python -m pyinotify -v /var/www. However, I want to use the daemon, and when I start watcher.py I get weird lines in my watcher.log (see below). I also included my watcher.ini file. Note: I have the latest version of Python. The watcher.py can be found here. What is wrong with what I did? Also, do I really need pyinotify? Thanks a lot for your help. watcher.ini: [DEFAULT] logfile=/var/log/watcher.log pidfile=/var/run/watcher.pid [job1] watch=/var/www events=create,delete,modify recursive=true command=mkdir /home/mockfolder ## just using this as test watcher.log: 2012-09-22 04:28:23.822029 Daemon started 2012-09-22 04:28:23.822596 job1: /var/www Traceback (most recent call last): File "/etc/watcher.py", line 359, in <module> daemon.start() File "/etc/watcher.py", line 124, in start self.run() File "/etc/watcher.py", line 256, in run autoadd = self.config.getboolean(section,'autoadd') File "/usr/lib/python2.7/ConfigParser.py", line 368, in getboolean v = self.get(section, option) File "/usr/lib/python2.7/ConfigParser.py", line 618, in get raise NoOptionError(option, section) ConfigParser.NoOptionError: No option 'autoadd' in section: 'job1'
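
    The traceback itself points at the fix: the daemon calls getboolean(section, 'autoadd'), but neither [job1] nor [DEFAULT] defines autoadd. A sketch of the repaired watcher.ini - the chosen value is an assumption (it plausibly controls whether newly created subdirectories are watched):

      [DEFAULT]
      logfile=/var/log/watcher.log
      pidfile=/var/run/watcher.pid

      [job1]
      watch=/var/www
      events=create,delete,modify
      recursive=true
      autoadd=true
      command=mkdir /home/mockfolder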

    Read the article

  • Degraded RAID-5 array with lvm2 lost superblock and partition table

    - by Fred Phillips
    I have a RAID-5 array of 4x1TB hard disks with one lvm2 partition on Ubuntu Linux 10.04 LTS. One of the disks has failed. I have re-assembled the array without this failed disk but now mdadm --examine claims the array has no superblock and fdisk says it has no partition table. What can I do to recover the data? # mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Sat Mar 5 14:43:49 2011 Raid Level : raid5 Array Size : 2930276352 (2794.53 GiB 3000.60 GB) Used Dev Size : 976758784 (931.51 GiB 1000.20 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Sat Mar 5 15:06:49 2011 State : clean, degraded Active Devices : 3 Working Devices : 3 Failed Devices : 1 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Name : boba:1 (local to host boba) UUID : 52eb4bc9:c3d8aab5:e0699505:e0e1aa05 Events : 18 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 65 1 active sync /dev/sde1 2 8 49 2 active sync /dev/sdd1 3 0 0 3 removed 4 8 17 - faulty spare /dev/sdb1 # mdadm --examine /dev/md0 mdadm: No md superblock detected on /dev/md0. # fdisk -l /dev/md0 Disk /dev/md0: 3000.6 GB, 3000602984448 bytes 2 heads, 4 sectors/track, 732569088 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1572864 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] md0 : active raid5 sdb1[4](F) sda1[0] sdd1[2] sde1[1] 2930276352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_] unused devices: <none>
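
    Note that mdadm --examine reads member superblocks, so running it against the assembled /dev/md0 is expected to fail, and an LVM physical volume carries no partition table for fdisk to report - both messages may be red herrings. A hedged check of whether the LVM metadata survived:

      mdadm --examine /dev/sda1   # member superblocks live on the partitions
      pvscan                      # does /dev/md0 still show up as a PV?
      vgchange -ay                # activate any volume group that was found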

    Read the article

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1 on SLES 11. My puppet.conf file at /etc/puppet/puppet.conf consists of: [main] # The Puppet log directory. # The default value is '$vardir/log'. logdir = /var/log/puppet # Where Puppet PID files are kept. # The default value is '$vardir/run'. rundir = /var/run/puppet # Where SSL certificates are kept. # The default value is '$confdir/ssl'. ssldir = $vardir/ssl certificate_revocation = false [master] hiera_config=/etc/puppet/hiera.yaml reporturl = http://puppet2.vvmedia.com/reports/upload ssl_client_header = SSL_CLIENT_S_DN ssl_client_verify_header = SSL_CLIENT_VERIFY # certname = dev-puppetmaster2.vvmedia.com # ca_name = 'dev-puppetmaster2.vvmedia.com' # facts_terminus = rest # inventory_server = localhost # ca = false [agent] # The file in which puppetd stores a list of the classes # associated with the retrieved configuration. Can be loaded in # the separate ``puppet`` executable using the ``--loadclasses`` # option. # The default value is '$confdir/classes.txt'. classfile = $vardir/classes.txt # Where puppetd caches the local configuration. An # extension indicating the cache format is added automatically. # The default value is '$confdir/localconfig'. localconfig = $vardir/localconfig my /etc/puppet/hiera.yaml consists of: :backends: yaml :yaml: :datadir: /etc/puppet/hieradata :hierarchy: - common - database I have a directory created at /etc/puppet/hieradata, and it contains: /etc/puppet/hieradata/common.yaml :nameserver: ["dnsserverfoo1", "dnsserverfoo2"] :smtp_server: relay.internalfoo.com :syslog_server: syslogfoo.com :logstash_shipper: logstashfoo.com :syslog_backup_nfs: nfsfoo:/vol/logs :auth_method: ldap :manage_root: true and /etc/puppet/hieradata/database.yaml :enable_graphital: true :mysql_server_package: MySQL-server :mysql_client_package: MySQL-client :allowed_groups_login: extranet_users Does anyone have any idea what could be causing Hiera to not load the requested values? I have even tried restarting the master. Thanks in advance, Cole
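
    One likely culprit, offered as a guess: the leading colons in the data files make every key a Ruby symbol, while lookups such as hiera('nameserver') use plain strings. A sketch of common.yaml with string keys, plus a command-line probe that bypasses Puppet entirely:

      nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
      smtp_server: relay.internalfoo.com

      hiera -c /etc/puppet/hiera.yaml nameserver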

    Read the article

  • Administrator view all mapped drives

    - by kskid19
    In my understanding of security, an administrator should be able to view all connections to and from a computer - just as they can view all processes/owners and network connections/owning processes. However, Windows 8 seems to have disabled this. As administrator in Win Vista+, when you run net use from an elevated prompt you get back all drives mapped, listed as unavailable. In Windows 8, the same command run from an elevated prompt returns "There are no entries in the list". The behavior is identical for PowerShell's Get-WmiObject Win32_LogonSessionMappedDisk. A workaround for persistent mappings is to run Get-ChildItem Registry::HKU*\Network*. This does not include temporary mappings (in my particular example the mapping was created through Explorer on an administrator account and I did not select "Reconnect at sign-in"). Is there a direct/simple way for an administrator to view the connections of any user (short of a script that runs under each user's context)? I have read Some Programs Cannot Access Network Locations When UAC Is Enabled but I do not think it particularly applies. ServerFault has an answer, but it still does not address non-persistent drives: How can I tell what network drives users have mapped?
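
    For the persistent-mapping half of this there is a known registry switch that links the elevated and non-elevated halves of the UAC token so drive letters show up in both - hedged, since it deliberately relaxes the isolation UAC introduces, and a reboot is needed:

      reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLinkedConnections /t REG_DWORD /d 1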

    Read the article

  • How to setup Python with Lighttpd and FastCGI (like PHP)

    - by johndir
    Running Lighttpd on Linux, I would like to be able to execute Python scripts just the way I execute PHP scripts. The goal is to be able to execute arbitrary script files stored in the WWW directory, e.g. http://www.example.com/*.py. I would not like to spawn a new Python instance (interpreter) for every request (as is done in regular CGI, if I'm not mistaken), which is why I'm using FastCGI. Following Lighttpd's documentation, the following is the FastCGI part of my config file. The problem is that it always runs the /usr/local/bin/python-fcgi script for every *.py file, regardless of the content of that file: http://www.example.com/script.py [output=>] "python-fcgi: test" I'm not interested in using any framework, but simply in executing individual [web] scripts. How can I make it act like PHP, executing any script in the WWW directory by requesting its path? /etc/lighttpd/conf.d/fastcgi.conf: server.modules += ( "mod_fastcgi" ) index-file.names += ( "index.php" ) fastcgi.server = ( ".php" => ( "localhost" => ( "bin-path" => "/usr/bin/php-cgi", "socket" => "/var/run/lighttpd/php-fastcgi.sock", "max-procs" => 4, # default value "bin-environment" => ( "PHP_FCGI_CHILDREN" => "1", # default value ), "broken-scriptfilename" => "enable" ) ), ".py" => ( "python-fcgi" => ( "socket" => "/var/run/lighttpd/fastcgi.python.socket", "bin-path" => "/usr/local/bin/python-fcgi", "check-local" => "disable", "max-procs" => 1, ) ) ) /usr/local/bin/python-fcgi: #!/usr/bin/python2 def myapp(environ, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) return ['python-fcgi: test\n'] if __name__ == '__main__': from flup.server.fcgi import WSGIServer WSGIServer(myapp).run()
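
    The observed behaviour is by design: with FastCGI, bin-path names one long-lived handler that receives every .py request, so mimicking PHP means that handler must dispatch on the requested path itself. A rough sketch of such a dispatcher - assuming flup, and that each target script defines a WSGI callable named application (both are assumptions):

      #!/usr/bin/python2
      import imp
      from flup.server.fcgi import WSGIServer

      def dispatch(environ, start_response):
          path = environ['SCRIPT_FILENAME']          # lighttpd passes the requested file here
          module = imp.load_source('handler', path)  # naive: recompiles on every hit
          return module.application(environ, start_response)

      if __name__ == '__main__':
          WSGIServer(dispatch).run()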

    Read the article

  • can't load IA 32-bit .dll on a AMD 64 bit platform

    - by user101425
    I have a Windows 2003 64 bit terminal server which we run a Java application from. The application has always worked up until 2 days ago. No new updates have been installed to the server in that time frame. I have tried re-installing java 64 bit but still have the following error. Unexpected exception: java.lang.reflect.InvocationTargetException java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at com.sun.javaws.Launcher.executeApplication(Unknown Source) at com.sun.javaws.Launcher.executeMainClass(Unknown Source) at com.sun.javaws.Launcher.doLaunchApp(Unknown Source) at com.sun.javaws.Launcher.run(Unknown Source) at java.lang.Thread.run(Unknown Source) **Caused by: java.lang.UnsatisfiedLinkError: C:\Documents and Settings\administrator\Application Data\Sun\Java\Deployment\cache\6.0\19\625835d3-5826d302-n\swt-win32-3116.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform** at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(Unknown Source) at java.lang.ClassLoader.loadLibrary(Unknown Source) at java.lang.Runtime.loadLibrary0(Unknown Source) at java.lang.System.loadLibrary(Unknown Source) at org.eclipse.swt.internal.Library.loadLibrary(Library.java:100) at org.eclipse.swt.internal.win32.OS.<clinit>(OS.java:18) at org.eclipse.swt.graphics.Device.init(Device.java:563) at org.eclipse.swt.widgets.Display.init(Display.java:1784) at org.eclipse.swt.graphics.Device.<init>(Device.java:99) at org.eclipse.swt.widgets.Display.<init>(Display.java:363) at org.eclipse.swt.widgets.Display.<init>(Display.java:359) at com.ko.StartKO.main(StartKO.java:57) ... 9 more

    Read the article

  • Can't open cocoa emacs from terminal using open -a

    - by Shane
    I installed emacs on my MacBook Air running Mac OS X 10.6.5 from this site http://emacsformacosx.com/. I believe this is what used to be called cocoa emacs. I dragged it into my Applications folder and it works fine when I run it from there. I want to be able to run it from the Terminal. After some googling, I tried open -a /Applications/Emacs.app foo.txt (foo.txt was an existing file). I got two emacs windows - one with the welcome screen and one with foo.txt loaded. I tried a few applications in the /Applications directory and they did not seem to behave like this. I had installed it using my own account (an admin account), so after doing ls -l on /Applications I noticed that the owner and group were different from the other entries in this folder. I recursively changed the owner and group to root and wheel, like the others, but this did not help. The only thing that looks odd now is that ls -l shows a @ character, which has something to do with extended attributes, but I don't know how to check these. Any suggestions on what to check next? Is using the open command the only way to run the program? Can I simulate what it does using a shell script?
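
    A common workaround, assuming the standard emacsformacosx.com bundle layout: bypass open entirely and call the binary inside Emacs.app, e.g. via an alias in ~/.bash_profile.

      alias emacs='/Applications/Emacs.app/Contents/MacOS/Emacs'
      emacs foo.txt    # launches the GUI frame from the terminal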

    Read the article

  • How to add a privilege to an account in Windows?

    - by mark
    Given: A VM running Windows 2008. I am logged on there using my domain account (SHUNRANET\markk). I have added the "Create global objects" privilege to my domain account: The VM is restarted (I know logout/logon is enough, but I had to restart). I log on again using the same domain account. It still seems to have the privilege: I run some process and examine its Security properties using Process Explorer. The account does not seem to have the privilege: This is not an idle curiosity. I have a real problem: without this privilege the named pipe WCF binding works neither on Windows 2008 nor on Windows 7! Here is an interesting discussion on this matter - http://social.msdn.microsoft.com/forums/en-US/wcf/thread/b71cfd4d-3e7f-4d76-9561-1e6070414620. Does anyone know how to make this work? Thanks. EDIT BTW, when I run the process elevated, everything is fine and Process Explorer does display the privilege as expected: But I do not want to run it elevated. EDIT2 I equally welcome any solution, be it configuration only or mixed with code. EDIT3 I have posted the same question on MSDN forums and they have redirected me to this page - http://support.microsoft.com/default.aspx?scid=kb;EN-US;132958. I have yet to determine its relevance, but it looks promising. Notice also that it is a completely code-based solution that they propose, so whoever moved this post to ServerFault - please move it back to StackOverflow.
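
    To confirm whether the filtered (non-elevated) token really lacks the right, whoami can list the privileges of the current token directly - a quick hedged check, SeCreateGlobalPrivilege being the internal name of "Create global objects":

      whoami /priv | findstr /i SeCreateGlobalPrivilege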

    Read the article

  • Mac updated just now, postgres now broken

    - by Dave
    I run PostgreSQL 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my thin server (mirroring Heroku Cedar) as normal and then browse to my Rails app, I get: PG::Error could not connect to server: Permission denied Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"? What happened? After browsing around a few questions I'm still confused, but here's some extra info: Running psql from the command line gives the same error. I can run pgAdmin 3, connect via it and run SQL with no problems. Running which psql shows the version as /usr/bin/psql. I created a PostgreSQL user back when I got the Mac. I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well. I know it's rubbish, but apart from a note on passwords I don't have any extra info on how I configured Postgres - though the obvious implication is that I did not use the _postgres user. Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks. Edit: Playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands: $ sudo su postgreSQL bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data pg_ctl: another server might be running; trying to start server anyway server starting bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists 2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"? bash-3.2$ exit
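
    One frequent cause after OS X updates, offered as a guess: Apple ships its own psql in /usr/bin that expects a socket in /var/pgsql_socket, shadowing the PostgreSQL 9.1 binaries (which would match the which psql output above). Two hedged checks:

      /Library/PostgreSQL/9.1/bin/psql -h localhost    # talk TCP to your own server
      export PATH=/Library/PostgreSQL/9.1/bin:$PATH    # put the real psql first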

    Read the article

  • Mongrel Cluster on Ubuntu Server Karmic

    - by trobrock
    I am trying to get Mongrel Cluster working on my Ubuntu Server Karmic box in preparation for setting up Capistrano. I've been trying to get the two to work all day and finally decided to completely remove Capistrano and see if I can just get Mongrel Cluster to work. I ran this to install mongrel cluster: gem install mongrel mongrel_cluster Everything installed fine, but when I change into my app's directory... # mongrel_rails -bash: mongrel_rails: command not found I can run it from its install location: # /var/lib/gems/1.8/bin/mongrel_rails Usage: mongrel_rails <command> [options] Available commands are: ... It lets me build the cluster configuration file fine, but when I run the cluster::start command: # /var/lib/gems/1.8/bin/mongrel_rails cluster::start starting port 8000 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8000 -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log starting port 8001 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8001 -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log starting port 8002 /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8002 -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log It seems it isn't calling it with the right path in that command; what can I do to fix this? I tried setting the path previously when trying to set up Capistrano, but the path didn't stay set when Capistrano used SSH to run the commands.
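
    Both symptoms point at PATH: the shell cannot find mongrel_rails, and cluster::start shells out to the bare command name again. A hedged fix is to put the gem bin directory (the path shown in the question) on PATH permanently:

      echo 'export PATH=$PATH:/var/lib/gems/1.8/bin' >> ~/.bashrc
      source ~/.bashrc
      mongrel_rails cluster::start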

    Read the article

  • Cisco Configuration backup with Windows Script.

    - by Jeff
    We have a client with a lot of Cisco devices and we would like to automate the backups of these devices through telnet, ideally over TFTP. We have both 2003 and 2008 servers. I wrote this: Set WshShell = WScript.CreateObject("WScript.Shell") Dim fso Set fso = CreateObject("Scripting.FileSystemObject") Dim ciscoList ciscoList = "D:\Scripts\SwitchList.txt" Set theSwitchList = fso.OpenTextFile(ciscoList, 1) Do While theSwitchList.AtEndOfStream <> True cisco = theSwitchList.ReadLine Run "cmd.exe" SendKeys "telnet " SendKeys cisco SendKeys "{ENTER}" SendKeys "USERNAME" SendKeys "{ENTER}" SendKeys "PASSWORD" SendKeys "{ENTER}" SendKeys "en" SendKeys "{ENTER}" SendKeys "PASSWORD" SendKeys "{ENTER}" SendKeys "copy startup-config tftp{ENTER}" SendKeys "(TFTP IP){ENTER}" SendKeys "FileName.txt{ENTER}" SendKeys "exit{ENTER}" 'close telnet session' SendKeys "{ENTER}" 'get command prompt back SendKeys "{ENTER}" SendKeys "exit{ENTER}" 'close cmd.exe On Error Resume Next WScript.Sleep 3000 Loop Sub SendKeys(s) WshShell.SendKeys s WScript.Sleep 300 End Sub Sub Run(command) WshShell.Run command WScript.Sleep 100 WshShell.AppActivate command WScript.Sleep 300 End Sub But the problem with this is that the SendKeys go to the console session; I'm trying to find a solution that would not require a user to be logged in. Does anyone have any ideas? I have some knowledge of VBS and PowerShell, and a pretty good grasp of batch scripting.
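
    An alternative that avoids SendKeys and a logged-in console session, sketched under the assumption that PuTTY's plink is acceptable: keep the login/enable/copy dialogue in a text file and pipe it into a telnet session per device.

      rem cisco-backup.txt holds the same lines the script types:
      rem USERNAME / PASSWORD / en / PASSWORD / copy startup-config tftp / ...
      plink -telnet 10.0.0.1 < cisco-backup.txt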

    Read the article

  • What am I doing wrong in my config for MySQL?

    - by Knight Hawk3
    When I load my my.cnf with the config at the bottom, MySQL fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; not sure how to check, I only installed it today). I will give you any info you ask for. Thanks for helping! # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/run/mysqld/mysqld.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/run/mysqld/mysqld.sock skip-locking key_buffer = 16K max_allowed_packet = 1M table_cache = 4 sort_buffer_size = 64K read_buffer_size = 256K read_rnd_buffer_size = 256K net_buffer_length = 2K thread_stack = 64K # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (using the "enable-named-pipe" option) will render mysqld useless! # #skip-networking server-id = 1 # Uncomment the following if you want to log updates #log-bin=mysql-bin # Uncomment the following if you are NOT using BDB tables skip-bdb # Uncomment the following if you are using InnoDB tables #innodb_data_home_dir = /var/lib/mysql/ #innodb_data_file_path = ibdata1:10M:autoextend #innodb_log_group_home_dir = /var/lib/mysql/ #innodb_log_arch_dir = /var/lib/mysql/ # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high #innodb_buffer_pool_size = 16M #innodb_additional_mem_pool_size = 2M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 5M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 1 #innodb_lock_wait_timeout = 50 skip-innodb [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 1M sort_buffer_size = 1M [myisamchk] key_buffer = 1M sort_buffer_size = 1M [mysqlhotcopy] interactive-timeout So what is my silly error?
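
    A likely explanation, flagged as a guess: this file carries MySQL 5.0-era options, and mysqld 5.5 aborts on the first unknown option before logging anything useful. skip-locking was renamed skip-external-locking, and the BDB engine behind skip-bdb was removed after 5.1, so:

      # MySQL 5.5: update or drop the obsolete options
      skip-external-locking    # replaces: skip-locking
      # skip-bdb               # delete this line entirely; BDB is gone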

    Read the article

  • How can I both pipe and display output in Windows' command line?

    - by Bob
    I have a process I need to run within a batch file. This process produces some output. I need to both display this output to the screen and send (pipe) it to another program. The bash method uses tee: echo 'ee' | tee /dev/tty | foo Is there an equivalent for Windows? I am happy to use PowerShell if necessary. There are tee ports for Windows, but there does not appear to be an equivalent for /dev/tty, which complicates matters. The specific use-case here: I have a program (launch4j) that I need to run, displaying output to the user. At the same time, I need to be able to detect success or failure in the script. Unfortunately, this program does not set an exit code, and I cannot force it to do so. My current workaround involves piping to find, to search the output (launch4j config.xml | find "Successfully created") - however, that swallows the output I need to display. Therefore, I need some way to both display to the screen and send the ouput to a command - and this command should be able to set ERRORLEVEL (it cannot run asynchronously).
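
    PowerShell's Tee-Object covers this case: it displays the pipeline output while also capturing it, and the script can then set the exit code itself. A hedged sketch, reusing the launch4j command from the question:

      launch4j config.xml | Tee-Object -Variable out
      if (-not ($out -match "Successfully created")) { exit 1 }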

    Read the article

  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...) If the Zabbix server in Server Room A goes down, the one in Server Room B becomes active (and vice versa). Ditto for the MySQL servers/instances. If either of those cases happens, however, the connection between the Zabbix server and the MySQL server becomes significantly slower, as it has to travel over the WAN. Question Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch the master node (if available) to the one that's in the same server room as the active Heartbeat/Corosync node, and (more challengingly) is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master server in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB? Theories Most likely I can get Corosync to run something that'd log in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that both the Zabbix service and its MySQL service would migrate as one, if possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?

    Read the article

  • Problem restoring from tar backup: why are there /dev/disk/by-id/ symlinks and how can I avoid them?

    - by SK.
    Hello, I'm trying to make a bare-bones backup system with the most basic tools available on openSUSE 11.3 (in this case: bash, fdisk, tar & grub legacy). Here's the workflow for my scripts: backup.sh: (run from external system, e.g. LiveCD) make an fdisk script ($fscript) from fdisk -l's output [works] mount the partitions from the system's fstab [works] tar the crucial stuff in file.tgz [works] restore.sh: (run from external system, e.g. LiveCD) run fdisk $dest < $fscript to restore partitioning [works] format and mount partitions from system's fstab [fails] extract from file.tgz [works when mounting manually] restore grub [fails] I have recently noticed that openSUSE (though I'm sure it has nothing to do with the distro) has different output in /etc/fstab and /boot/grub/menu.lst; more precisely, the partition name is for example "/dev/disk/by-id/numbers-brandname-morenumbers-part2" instead of "/dev/sda2" - but it is basically a simple symlink. My questions about this: what is the point of such symlinks, especially if we're restoring on a different disk? Is there a way to cleanly prevent the creation of those symlinks and use the "true" /dev/sdx everywhere instead? If the previous is no, do you know a way to replace those symlinks on the fly in a text file? I tried this script, but it only works if the line starts with the symlink (the case for fstab, not menu.lst): ### search and replace /dev/disk/by-id/... to /dev/sdx while read oldVolume rest; do # get first element, ignore rest of line if [[ "$oldVolume" =~ ^/dev/disk/by-id/.*(-part[0-9]*$)? ]]; then newVolume=$(readlink $oldVolume) # replace pointer by pointee, returns "../../sdx" echo /dev/${newVolume##*/} $rest >> TMP # format to "/dev/sdx", write line else echo $oldVolume $rest >> TMP # nothing to do fi done < $file mv -f TMP $file # save changes I've had trouble finding a solution to this on Google, so I was hoping some of the members here could help me. Thank you.
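
    Since readlink resolves a path wherever it appears, a hedged variant that handles menu.lst too is to rewrite each by-id token in place with readlink -f (this must run on the source system, where the symlinks still resolve):

      # replace every /dev/disk/by-id/... token with its canonical /dev/sdXN
      while read -r line; do
          for tok in $line; do
              [[ $tok == /dev/disk/by-id/* ]] && line=${line//$tok/$(readlink -f "$tok")}
          done
          echo "$line"
      done < "$file" > TMP && mv -f TMP "$file"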

    Read the article

  • NAT confusion regarding cisco ASA5510

    - by LonelyLonelyNetworkN00b
    I'm setting up my first Cisco firewalls. A little information first: I have two ASA 5510s set up in a working active/standby pair. From my ISP I have two public subnets, a /29 and a /26. On my DMZ interface I have the /26 configured. On my WAN interface I have configured the /29 IPs. My ISP routes the /26 via the /29 primary IP. I'm running ASA 8.2. I've turned nat-control off, because I don't want to use NAT for anything other than some internal interfaces. In essence, I don't want to use NAT unless I specify it. I have an internal interface with the network 192.168.100.0/24. I've tried setting up NAT like this: nat (inside) 1 192.168.100.0 255.255.255.0 global (WAN) 1 interface I was under the impression that this would let connections going from 192.168.100.0/24 out the WAN interface be port-address-translated. I'm not getting this to work for some reason. The inside interface has a security level of 100, and WAN has a security level of 0.
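
    With nat-control disabled, the configuration shown looks plausible, so a hedged next step is to let the ASA explain the drop itself: packet-tracer simulates a flow through every processing phase (the source host and destination below are placeholders), and show xlate confirms whether a PAT entry was built.

      packet-tracer input inside tcp 192.168.100.10 12345 8.8.8.8 80
      show xlate | include 192.168.100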

    Read the article

  • Problems with bash script, mysql inserts and launchd

    - by Armands
    I am developing an automated system which consists of 3 parts: MySQL, bash and launchd. The bash script takes folders of work-related stuff, zips and archives them, and puts info about them into the database that is located on a local MAMP server. Everything works as expected when I run the script from the terminal, but when I use launchd to run the script automatically, it finishes without errors yet does not put the values into the database. I've tried to make logs of the returned messages, but the logs end up empty, as if the command had run the way it was supposed to. Any help would be appreciated! .plist contents: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.adevo.ari.zip</string> <key>ProgramArguments</key> <array> <string>/Volumes/Archive-Plus/B-ARCHIVE-PLUS/ZZ_UTILITY_FOLDER/Compress.sh</string> </array> <key>Nice</key> <integer>1</integer> <key>StartInterval</key> <integer>120</integer> <key>RunAtLoad</key> <true/> </dict> </plist> I made this .plist file just by searching the web.
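
    A frequent difference between "from the terminal" and "from launchd", offered as a guess: launchd jobs start with a minimal PATH, so the script's mysql call may silently resolve to nothing. The plist can inject a PATH; the MAMP location below is an assumption based on the default install:

      <key>EnvironmentVariables</key>
      <dict>
          <key>PATH</key>
          <string>/Applications/MAMP/Library/bin:/usr/bin:/bin</string>
      </dict>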

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid, but is there any downside to just enabling jumbo frames in practice? From what I understand: Any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum frame size is negotiated during session setup. UDP is a theoretical problem, as a server may just send a large UDP packet that gets dropped on the way. Practically, though, as UDP is packet based, I do not really think any software would send a UDP packet larger than 1500 bytes net without app-level configuration changes - at least this is how I do my programming, as it is quite hard to get a decent MTU size for that without testing yourself, so you fall back in programming to packets of at most 1500 bytes. The network in question is a standard small business network - we just upgraded from an unmanaged 24-port switch to a 52-port switch with 4 10G ports (Netgear, quite cheap) and will move a file server to 10G, also for iSCSI serving. All my equipment at the Ethernet level can handle a minimum of 9000 bytes, and due to local firewalls I really want to get packets larger (less firewall processing), but the network is also NAT'ed to the internet. On top of that, different machines quite often move around (download) large files (multi-gigabyte range) for processing. The question is: can I expect problems when I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending more than 1500-byte UDP packets (if that is a practical problem, please tell me), and for TCP the MTU is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations must then be on both VLANs.
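
    One hedged way to verify a path end to end before committing: send a non-fragmentable ping at jumbo size (9000 bytes minus 28 bytes of IP+ICMP headers) and see whether it arrives; the hostname is a placeholder.

      rem Windows (-f: don't fragment, -l: payload bytes)
      ping -f -l 8972 fileserver
      # Linux equivalent
      ping -M do -s 8972 fileserver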

    Read the article
