Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • Proper way to use hiera with puppetlabs-spec-helper?

    - by Lee Lowder
    I am trying to write some rspec tests for my modules, most of which now use hiera. I have a .fixtures.yml:

        fixtures:
          repositories:
            stdlib: https://github.com/puppetlabs/puppetlabs-stdlib.git
            hiera-puppet: https://github.com/puppetlabs/hiera-puppet.git
          symlinks:
            mongodb: "#{source_dir}"

    and a spec/classes/mongodb_spec.rb:

        require 'spec_helper'

        describe 'mongodb', :type => 'class' do
          context "On an Ubuntu install, admin and single user" do
            let :facts do
              {
                :osfamily               => 'Debian',
                :operatingsystem        => 'Ubuntu',
                :operatingsystemrelease => '12.04'
              }
            end
            it {
              should contain_user('XXXX').with( { 'uid' => '***' } )
              should contain_group('XXXX').with( { 'gid' => '***' } )
              should contain_package('mongodb').with( { 'name' => 'mongodb' } )
              should contain_service('mongodb').with( { 'name' => 'mongodb' } )
            }
          end
        end

    but when I run the spec test, I get:

        # rake spec
        /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color
        F

        Failures:

          1) mongodb On an Ubuntu install, admin and single user
             Failure/Error: should contain_user('XXXX').with( { 'uid' => '***' } )
             LoadError: no such file to load -- hiera_puppet
             # ./spec/fixtures/modules/hiera-puppet/lib/puppet/parser/functions/hiera.rb:3:in `function_hiera'
             # ./spec/classes/mongodb_spec.rb:15

        Finished in 0.05415 seconds
        1 example, 1 failure

        Failed examples:

        rspec ./spec/classes/mongodb_spec.rb:14 # mongodb On an Ubuntu install, admin and single user

        rake aborted!
        /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color failed
        Tasks: TOP => spec_standalone
        (See full trace by running task with --trace)

    Module spec testing is relatively new, as is hiera, and so far I have been unable to find any suitable solution (the back and forth on puppet-dev was interesting, but not helpful). What changes do I need to make to get this to work? Installing Puppet from a gem and hacking on rubylib isn't a viable solution due to corporate policy. I am using Ubuntu 12.04 LTS + Puppet 2.7.17 + hiera 0.3.0.
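    A workaround that may be worth a try (an assumption on my part, not a verified fix): the LoadError suggests Ruby simply cannot find hiera_puppet on its load path, and the fixture checkout already ships it under spec/fixtures/modules/hiera-puppet/lib, so pointing RUBYLIB at that directory for the duration of the test run might be enough:

        # hedged sketch: expose the fixture's lib dir to Ruby, then run the specs
        export RUBYLIB="$PWD/spec/fixtures/modules/hiera-puppet/lib:$RUBYLIB"
        rake spec

    Since the fixtures are cloned by rake's spec_prep step, the directory should exist by the time the examples actually load the hiera function.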

    Read the article

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please note: portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com; asking here at another user's suggestion. I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options, and all of them have at least one serious shortcoming. I have a strong preference for something open source or dual licensed, but I'm more interested in finding the right tool than anything else. Not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET.

    Requirements:

    - manageable footprint (1-100 files, under 30 MB installed)
    - runs on Windows XP and Server (2003+)
    - no installer (exe, msi)
    - works with external pipes, processes, and files
    - support for MS SQL Server or ODBC connections

    Bonus points:

    - open source
    - FFI for calling functions in native DLLs
    - GUI support (native or gtk, wx, fltk, etc.)
    - Linux, AIX, and/or OS X support
    - dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development
    - able to package or compile scripts into executables

    So far I've tried:

    - Ruby: 148 MB on disk, 23,000 files
    - Portable Python: 54 MB on disk, 2,800 files
    - Strawberry Perl: 123 MB on disk, 3,600 files
    - REBOL: great, except closed source and no MSSQL or ODBC in the free version
    - Squeak Smalltalk: great, except poor support for scripting

    ---- points of clarification ----

    Why all the limitations? I realize some of my criteria seem arbitrarily confining; they're primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local admin for some period of time, frequently on a box I've never touched and where I don't have my own environment set up yet. I may have temporary admin rights on the box, but I'm not the admin for the machine; I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, it suddenly becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and the problem solved. That's why I want something that can be copied and run in the manner of a portable app.

    What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division, and I support our MSSQL databases in all three. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. Installing Ruby using psexec has taken a long time over these connections, and in these instances the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed. I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine I've never logged into. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible, given the approvals, time and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • CentOS iscsi initiator has session but there is no block device

    - by jcalfee314
    I have installed the scsi-target-utils package on CentOS and used it to perform a discovery. The discovery did give me an active session. I restarted the iscsi service, but I do not see any new devices (fdisk -l). I see in /var/log/messages that my connection is operational now. I'm not sure how to debug this further. Can someone direct me toward fixing this?

    Discovery:

        iscsiadm -m discovery -t sendtargets -p 192.168.0.155

    returns:

        192.168.0.155:3260,-1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3

    Just to verify it actually worked, "iscsiadm -m session" returns:

        tcp: [1] 192.168.0.155:3260,1 iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3

    Restarting as the directions say to do (service iscsi restart), this output is written to /var/log/messages:

        Stopping iscsi: Sep 20 12:14:22 localhost kernel: connection1:0: detected conn error (1020)
        [  OK  ]
        Starting iscsi: Sep 20 12:14:22 localhost kernel: scsi1 : iSCSI Initiator over TCP/IP
        Sep 20 12:14:22 localhost iscsid: Connection1:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is shutdown.
        Sep 20 12:14:22 localhost iscsid: Could not set session2 priority. READ/WRITE throughout and latency could be affected.
        [  OK  ]
        [root@db iscsi]# Sep 20 12:14:23 localhost iscsid: Connection2:0 to [target: iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3, portal: 192.168.0.155,3260] through [iface: default] is operational now

    I then ran a login command:

        iscsiadm -m node -T iqn.2009-02.com.twinstrata:cloudarray:sn-1d07c1b62d4ec8f3 -p 192.168.0.155 -l

    No errors, and no logging occurred. Next I compared the output of "fdisk -l | egrep dev" both with the iscsi session and without: there is no difference. I suppose I could just look in /etc/mtab. Any ideas on how I can get an iscsi device?
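    A couple of checks that may narrow this down (the usual debugging steps, sketched, not a confirmed fix): print the session at maximum verbosity to see whether the target actually exposed a LUN to this initiator, and force a SCSI rescan in case the LUN appeared after login:

        # show attached SCSI devices (if any) for the logged-in session
        iscsiadm -m session -P 3

        # ask every SCSI host to rescan for new LUNs, then re-check
        for h in /sys/class/scsi_host/host*; do
            echo "- - -" > "$h/scan"
        done
        fdisk -l

    If -P 3 reports the session but no "Attached scsi disk" line, the array side most likely has no volume mapped to this initiator's IQN, which would explain a healthy session with no block device.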

    Read the article

  • Why won't this script accept any arguments?

    - by Nate Wagar
    I'm trying to write an SVN post-commit hook and, strangely, am getting hung up on what should be the easiest part.

    The script:

        set REPO="$1"
        set REV="$2"
        set SVNBIN="/opt/CollabNet_Subversion/bin/"
        set SSHBIN="/usr/bin/ssh"
        set HOST="staging.domain.net"
        set timeout=30
        set USERNAME="svn-usr"
        set E_NO_CONNECT=2
        set E_WRONG_PASS=3
        set E_UNKOWN=25
        set CHANGED=`"$SVNBIN"svnlook changed --revision $REV $REPOS`
        echo "Here are changes: $CHANGED" >> /var/svn/repos/www/logs/testing
        echo "Command: $0; Repo: $REPO; Rev: $REV; Total: $#" >> /var/svn/repos/www/logs/testing
        set PROJECT ""

    Yet when I call it, it doesn't seem to see the arguments I pass to it:

        /var/svn/repos/www/logs> sudo ../hooks/post-commit /var/svn/repos/www 33
        svnlook: missing argument: --revision
        Type 'svnlook help' for usage.
        /var/svn/repos/www/logs> cat testing
        Here are changes:
        Command: ../hooks/post-commit; Repo: ; Rev: ; Total: 1

    This is on a Solaris 10 SPARC box. I'm a bit of a script newbie, but shouldn't this be really easy?
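    For what it's worth, the script mixes csh-style "set VAR=value" assignments with Bourne-shell expansion, and it reads $REPOS where it assigned $REPO. A hedged rewrite in plain Bourne shell (assuming the hook runs under /bin/sh, the usual default when no shebang is given):

        #!/bin/sh
        # Bourne-shell assignments: no 'set' keyword, no spaces around '='
        REPOS="$1"
        REV="$2"
        SVNBIN="/opt/CollabNet_Subversion/bin/"
        LOG=/var/svn/repos/www/logs/testing

        CHANGED=`"$SVNBIN"svnlook changed --revision "$REV" "$REPOS"`
        echo "Here are changes: $CHANGED" >> "$LOG"
        echo "Command: $0; Repo: $REPOS; Rev: $REV; Total: $#" >> "$LOG"

    In POSIX sh, "set REPO=..." does not assign a variable at all; it replaces the positional parameters, which matches the empty values and the "Total: 1" the log shows.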

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    We have an Ubuntu 10.04 server with a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba.

    The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission; however, they can't, as Samba won't allow it.

    Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them?

    Edit: the share is defined in smb.conf as follows:

        [global]
        log file = /var/log/samba/log.%m
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        obey pam restrictions = yes
        map to guest = bad user
        encrypt passwords = true
        passwd program = /usr/bin/passwd %u
        passdb backend = tdbsam
        dns proxy = no
        netbios name = ubsrv
        server string = ubsrv
        unix password sync = yes
        os level = 20
        syslog = 0
        usershare allow guests = yes
        panic action = /usr/share/samba/panic-action %d
        max log size = 1000
        pam password change = yes
        workgroup = workgroup

        [Projects]
        valid users = @Staff
        writeable = yes
        user = @Staff
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes

    The folder itself looks like this:

        ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects

    Thanks in advance, Matt
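    One smb.conf parameter that may be relevant here (an educated guess, not a verified fix): by default smbd only lets the owner of a file change its permission bits, mirroring Unix chmod semantics. Setting "dos filemode = yes" tells Samba to let anyone with write access to the file change its attributes, which is much closer to the Windows behaviour described:

        [Projects]
        ...
        store dos attributes = Yes
        ; let group members with write access clear the Read Only bit
        dos filemode = Yes

    Reload Samba and retest as UserB after making the change.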

    Read the article

  • Installing and running a guest OS on KVM-qemu with only serial console access

    - by nixnotwin
    I am trying to install a BSD distro with virt-install. With a Linux distro I used this:

        virt-install -n debian -r 1024 --vcpus=1 --accelerate -v \
          --disk /var/kvm/installation-disks/debian.img,size=6 \
          --nographics \
          --network=bridge:br0,model=ne2k_pci,mac=52:54:00:66:68:09 \
          -l http://ftp.de.debian.org/debian/dists/squeeze/main/installer-amd64/current/images/ \
          -x console=ttyS0,115200

    This loads the installer directly from the online mirror. With Fedora I used this mirror: http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/

    Are there such mirrors for FreeBSD or OpenBSD? The reason I want directly installable ftp/http mirrors is that I can access my physical server only via ssh, and it doesn't have an X server or a window manager to give me a VNC GUI. When I tried installing CentOS 6 with an online mirror, I was able to finish the installation via serial console, but after I rebooted it, the serial console never worked for me. I tried everything possible: editing the menu.lst, inittab and securetty files. Fedora 16 booted fine from the serial console, but got stuck when it loaded the anaconda installer. I tried editing the FreeBSD ISO installation media by adding a serial console option to the boot options, and the installation was successful, but I couldn't boot into it because it wasn't giving console access. I couldn't edit any files afterwards either, as a UFS partition cannot be mounted with write access on my Ubuntu Server 10.04. Only Debian squeeze worked well; it worked for me even without editing a single configuration file. I want to have CLI versions of Fedora/CentOS and FreeBSD/OpenBSD, but it looks like there isn't much hope for me to have them, as I have to depend on a serial console to do everything.
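    On the FreeBSD side, one detail that might explain the silent boot after a successful serial install (hedged, since I can't test this exact setup): the installed system only attaches its console to the serial port if the loader is told to. If a shell can be reached at the end of the installation, before the first reboot, two lines should redirect the console:

        # run inside the freshly installed FreeBSD system
        echo 'console="comconsole"' >> /boot/loader.conf
        # optionally force the boot blocks onto the serial port at 115200 baud too
        echo '-S115200 -h' > /boot.config

    That avoids having to edit the UFS partition from the Ubuntu host afterwards.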

    Read the article

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back.

    The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. The first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option.

    Can a kind soul please save me and suggest a way to: a) get the repo back, or b) get the files back, with filenames?

    For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:

        :!hg rm %

    This complained that the file is in a subrepository, so I specified the following:

        :!hg rm % -R engine

    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:

        :!rm -rf % -R engine

    Somehow, seeing "force" makes me do a rm -rf by reflex.
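    One avenue that may preserve filenames (a sketch; recovery on ext3 is never guaranteed): unlike scalpel, which carves raw blocks, extundelete works from the ext3 journal and directory entries, so it can often restore deleted files under their original paths. Working from an image so nothing further touches the partition:

        # image the partition first (sdXN is a placeholder for the real device)
        dd if=/dev/sdXN of=/mnt/other-disk/part.img bs=4M

        # then attempt a name-aware recovery of the project tree,
        # with the path given relative to the filesystem root
        extundelete /mnt/other-disk/part.img --restore-directory home/user/project

    If that brings back the .hg directory intact, "hg verify" will tell you whether the changesets survived.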

    Read the article

  • Fixed and dynamic IPs in ISC DHPD lead to double lease

    - by GorillaPatch
    I would like to have a small dynamic address range, while most clients are assigned a fixed IP address. My dhcpd.conf looks like this:

        use-host-decl-names on;
        authoritative;
        allow client-updates;
        ddns-updates on;

        # settings for DHCP leases
        default-lease-time 3600;
        max-lease-time 86400;
        lease-file-name "/var/lib/dhcpd/dhcpd.leases";

        subnet 192.168.11.0 netmask 255.255.255.0 {
            ddns-updates on;
            pool {
                # IP range which will be assigned statically
                range 192.168.11.1 192.168.11.240;
                deny all clients;
            }
            pool {
                # small dynamic range, used for temporary devices
                range 192.168.11.241 192.168.11.254;
            }
        }

        group {
            host pc1 {
                hardware ethernet xx:xx:xx:xx:xx:xx;
                fixed-address 192.168.11.11;
            }
        }

    The motivation for the pool declaration with "deny all clients" comes from the ISC DHCPD homepage: http://www.isc.org/files/auth.html. The idea is that hosts can first be added to the network, where they receive a temporary IP from the 241-254 address range, and an explicit host declaration is written later; upon the next connect, the host receives the right configuration.

    The problem is that I am getting error messages saying that 192.168.11.13 has both a dynamic and a static lease. I am a bit confused, as I expected that a pool declared with "deny all clients" would not count as dynamic:

        Dynamic and static leases present for 192.168.11.13.
        Remove host declaration pc1 or remove 192.168.11.13
        from the dynamic address pool for 192.168.11.0/24

    Is there a way to have the DHCP server send a DHCPNAK to clients if they have a host statement, and retain this dynamic range?
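    One hedged reading of that error: for this check, any address covered by a range statement belongs to a dynamic pool, regardless of the "deny all clients" rule, so the conflict fires as soon as a host declaration's fixed-address falls inside 192.168.11.1-240 while an old dynamic lease for the same address still sits in the leases file. Clearing the stale lease should quiet it (init script name assumed):

        service dhcpd stop
        # remove the whole "lease 192.168.11.13 { ... }" block(s) from the file
        vi /var/lib/dhcpd/dhcpd.leases
        service dhcpd start

    Keeping fixed addresses outside any declared range avoids the warning altogether, at the cost of abandoning the auth.html pattern.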

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes things as easy as possible: I write and test all code on my local machine, and sync it with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network, so I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server."

    To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server names to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
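    A plausible explanation (hedged, since the rest of the config isn't shown): with name-based virtual hosts, a request whose Host header matches no ServerName is served by the first vhost in the file, here the ladybug one, and the default directory rules then deny access from outside. Declaring the DynDNS name as an alias of the vhost you want reachable usually sorts this out:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            # hypothetical hostname; replace with your real DynDNS name
            ServerAlias url.dyndns.info
            <Directory "c:/wamp/www">
                Order Allow,Deny
                Allow from all
            </Directory>
        </VirtualHost>

    With that in place, http://url.dyndns.info/projects maps onto c:/wamp/www/projects, which is roughly the second behaviour asked about.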

    Read the article

  • Not able to find scripts present in /etc/profile.d directory [on hold]

    - by priya
    I am using Red Hat Linux 6.0 on a davinchi board. I have to change the system clock resolution, so I am changing the HZ environment variable. For this I have written a script that sets HZ=1000, placed it in /etc/profile.d, and added a loop to /etc/profile so that, as usual, /etc/profile loads the scripts present in /etc/profile.d. But when I log into the system at root level, it shows this error:

        -bash: ./etc/profile.d/resolution.sh: No such file or directory

    (resolution.sh is my script name). Also, why is it showing ./etc and not /etc here? Is something related to that? I also tried adding the script to /etc/init.d, but still no change in the value of HZ takes place. Please tell me where to make the change so that this environment variable gets changed.

    The script (resolution.sh) contains:

        #!/bin/bash
        export HZ=1000

    The content of /etc/profile which I entered is:

        if [ -d /etc/profile.d ]; then
            for i in /etc/profile.d/*.sh; do
                if [ -r $i ]; then
                    .$i
                fi
            done
            unset i
        fi

    And the output of the grep command is:

        -rw-r--r-- 1 root root  535 Feb  4  2004 profile
        -rwxr-xr-x 2 root root 4096 Feb  2  2004 profile.d
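    The error message itself points at the likely bug: ".$i" glues the dot to the expanded path, so bash looks for a file literally named ./etc/profile.d/resolution.sh, which is exactly the path in the error. The source operator needs a space before the filename:

        if [ -d /etc/profile.d ]; then
            for i in /etc/profile.d/*.sh; do
                if [ -r "$i" ]; then
                    . "$i"    # the space between '.' and the filename is required
                fi
            done
            unset i
        fi

    One caveat, offered with less certainty about the goal here: an exported HZ only affects processes started from such login shells; it does not change the kernel's actual timer frequency, which is fixed when the kernel is built.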

    Read the article

  • Fail2Ban adds iptable rules but they are not working?

    - by EApubs
    Fail2Ban just blocked my IP for 3 SSH attempts. It added the iptables rule, and I can see it using the "sudo iptables -L -n" command. But I can still access the site and log in through SSH! What might be the problem? Is it because I'm using CloudFlare? I have set Nginx to write the real IPs to the access logs instead of the CloudFlare IPs. Isn't that enough?

        Chain fail2ban-ssh (1 references)
        target     prot opt source          destination
        DROP       all  --  119.235.14.8    0.0.0.0/0
        RETURN     all  --  0.0.0.0/0       0.0.0.0/0

    The INPUT chain:

        Chain INPUT (policy DROP)
        target                    prot opt source     destination
        fail2ban-NoAuthFailures   tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:80
        fail2ban-nginx-dos        tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 80,8090
        fail2ban-postfix          tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 25,465
        fail2ban-ssh-ddos         tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
        fail2ban-ssh              tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
        ufw-before-logging-input  all  --  0.0.0.0/0  0.0.0.0/0
        ufw-before-input          all  --  0.0.0.0/0  0.0.0.0/0
        ufw-after-input           all  --  0.0.0.0/0  0.0.0.0/0
        ufw-after-logging-input   all  --  0.0.0.0/0  0.0.0.0/0
        ufw-reject-input          all  --  0.0.0.0/0  0.0.0.0/0
        ufw-track-input           all  --  0.0.0.0/0  0.0.0.0/0
        LOG                       all  --  0.0.0.0/0  0.0.0.0/0   LOG flags 0 level 4
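    Two hedged observations worth checking. CloudFlare only proxies HTTP(S), so while rewriting real visitor IPs into the Nginx logs makes bans match the right address in the log, the TCP connections on port 80 still come from CloudFlare's ranges, and dropping the visitor's own IP cannot block their web traffic. SSH does not pass through CloudFlare, so there the ban should bite; the per-rule packet counters will show whether the DROP is being hit at all:

        # -v adds packet/byte counters; a "banned" IP that still logs in with
        # zero hits on the DROP rule means its packets match something earlier
        sudo iptables -L fail2ban-ssh -n -v
        sudo iptables -L INPUT -n -v

    Note also that an already-established SSH session keeps working after a ban; only new connections from the dropped address should fail.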

    Read the article

  • Ways to have audio output without wires

    - by viraptor
    I'm trying to find a way of using my home speakers/amp without actually having to connect them. There are two laptops that use them normally (so I don't like changing the connection all the time), and I'd rather move the speakers somewhere away from the couch. I'm not sure how to do this, though. The options I can think of are:

    - some kind of wireless jack-jack connection
    - finally getting a media server

    Unfortunately I can't find any good product for the first solution. I've seen some headphones which have the receiver integrated plus a separate transmitter, so in general the idea is already out there, just not in the form I need ;) I've also seen http://www.miccus.com/products/blubridge-mini-jack, but I'd have to have a compatible receiver, which I can't find on its own (maybe there's some application that the media server could use?).

    As for a media server: many of the plug servers look really interesting, but I'm not sure how to create an audio output and how to redirect the input, really. None of the plug servers I've seen so far advertises an audio output jack. I think this part could be fixed by getting one with a USB port and a separate cheap USB soundcard, and I hope the input can be sorted out in some rather simple way. I've got Linux running on both laptops, so I hope it would be possible to configure jack/pulse/whatever to use the remote endpoint, or even write a simple local-/dev/dsp:network:media-server-/dev/dsp forwarder.

    So the main question is: are there better ways? Are there any out-of-the-box solutions? Or maybe this was already done by someone and described somewhere?
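    For the input side on Linux, PulseAudio can already do the forwarding described above without a hand-rolled /dev/dsp bridge (a sketch, assuming both laptops and the media server run PulseAudio; hostnames and the subnet are placeholders):

        # on the plug/media server: accept audio from the LAN
        pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24

        # on each laptop: send one application's audio to the server
        PULSE_SERVER=mediaserver.local paplay test.wav
        # or persistently, in ~/.pulse/client.conf:
        #   default-server = mediaserver.local

    The per-process PULSE_SERVER variant means the couch laptops can keep local sound for everything else.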

    Read the article

  • Server setup scripts, patches and migrations

    - by Ben Swinburne
    I have written some scripts which I use to configure various servers in a uniform way. Each time I deploy a server I run the relevant scripts so that I know they're all configured the same. I then have some patch scripts, which are changes to the originals that I can run to ensure that modifications to the original setup reach each server. E.g.:

    - disable.sh - disable SELinux etc. to ensure the other scripts all run correctly
    - general.sh - Jailkit, AV, repos, RKHunter, security tweaks, uninstall unused bits, etc.
    - web.sh - installs and configures Apache2
    - 001_update_nr_licence_key.sh - update a licence key for a piece of software which has changed since its install in general.sh

    I can run the first three without a problem, but when it comes to running patches I am a bit stuck. Is there a sensible way of doing this with some software? My current thought is to write the server's role (web or db, for example) to a log file, along with the name of each patch that has run; a script could then iterate through a folder, find all patches for that role which have not yet run, and execute them. This seems a bit long-winded, however. Could someone advise me on the best way to keep my servers uniform?
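    For what it's worth, the run-once bookkeeping described above only takes a few lines of shell (a sketch; the role file and patch layout are hypothetical):

        #!/bin/bash
        # run every not-yet-applied patch for this server's role, recording successes
        ROLE=$(cat /etc/server-role)       # hypothetical: contains "web", "db", ...
        APPLIED=/var/log/patches.applied
        touch "$APPLIED"

        for patch in /opt/patches/"$ROLE"/*.sh; do
            grep -qxF "$patch" "$APPLIED" && continue   # already ran on this host
            bash "$patch" && echo "$patch" >> "$APPLIED"
        done

    Naming patches with a numeric prefix, as with 001_update_nr_licence_key.sh, keeps the glob expansion in application order.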

    Read the article

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, or otherwise (even COM or Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation, in order to save myself a lot of error-prone point-and-click manual labor.

    I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:

        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup

    However, if you create such a script and then delete its registry key, the script remains listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts there (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance).

    Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold:

    - Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing).
    - Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts?

    Thanks in advance for any suggestions; I've been banging my head against this one for a bit.
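    On the Task Scheduler branch of the question: while "at" cannot schedule boot-time jobs, schtasks.exe (present on both XP and Windows 7) can, which makes it easy to script per machine. A sketch with a placeholder path; note that, as far as I know, a scheduled ONSTART task does not hold back user logins the way a real startup script does:

        rem create a boot-time task running as SYSTEM on the target machine
        schtasks /Create /TN "LocalStartupScript" /TR "C:\scripts\startup.cmd" /SC ONSTART /RU SYSTEM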

    Read the article

  • Is there a way to do something like LVM over NFS?

    - by warren
    I realize that since NFS is not block-level, LVM can't be used directly. However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server? Specifically, I'd like to be able to do this on RHEL 4 (or 5, and then re-export the combined mount to my RHEL 4 server).

    Expansion: the reason I pegged LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc.) to all mount at /mnt/space, so that /mnt/space on this server (serverx) appears as one large filesystem. Yes, I know that re-exporting is generally a Bad Thing™, but I thought it might work if there was a way to accomplish this on a newer release as opposed to an older one.

    From reading the unionfs docs, it appears that I can't use it over a remote connection; have I misread them? More accurately: since UnionFS merges the contents of multiple branches and makes them appear as one, it doesn't seem to go in reverse. I'm trying to mount a bunch of NFS points in a merged fashion and then write to them, not caring where the data goes, a la LVM.
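    If FUSE is an option on the box, one tool that matches the "merge several mounts and don't care where writes land" description is mhddfs (hedged: I have not tried it on RHEL 4, and it would need FUSE packages from a third-party repo):

        # mount the three NFS exports, then pool them into one tree
        mount servera:/mnt/export /mnt/a
        mount serverb:/mnt/export /mnt/b
        mount serverc:/mnt/export /mnt/c
        mhddfs /mnt/a,/mnt/b,/mnt/c /mnt/space -o allow_other

    Writes go to whichever branch has free space, and each file lives wholly on one export, unlike a striping setup.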

    Read the article

  • Fast distributed filesystem for a large amounts of data with metadata in database

    - by undefined hero
    My project uses several processing machines and one storage machine. Currently, storage is organized as an MSSQL filetable shared folder. Every file in storage has some metadata in the database. Processing machines execute tasks for which they need files from storage along with their metadata. After completing a task, a processing machine puts the resulting data back in storage; from there it is taken by another processing machine, which also generates some file and puts it back in storage, and so on.

    Everything was fine, but as the number of processing machines increased, I found myself bottlenecked by the storage machine's hard drive performance. So I want the processing machines to put files in a distributed FS, to lift load from the storage machine, so that they can take data from each other, not only from the storage machine.

    Can you suggest a particular distributed FS which meets my needs? Or is there another way to solve this problem without one? The amount of data in the FS at any one time is several terabytes (storage can handle this, but the processors cannot). Data consistency is critical. The read/write policy is: once a file is written, it is constant and may only be removed, not modified. My current platform is Windows, but I'm ready to switch if there is a substantially more convenient solution on another one.

    Read the article

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate these spikes with SUSPENDED processes in SQL Server Management Studio when I execute "exec sp_who2". The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

    - system drive (Windows) on RAID 1, with two 280GB drives
    - SQL on RAID 10 (2 mirrored drives of 280GB in two different spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day.

    I have a couple of ideas I'm toying with:

    1. checking indexes and reindexing some tables
    2. adding an additional RAID 1 (with 2 new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID

    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction logs before writing to the database. But on the flip side, we'd be reducing both the size of the disks and the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests.

    Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?
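    On the limiting-factor question, one cheap data point before buying hardware (a hedged rule of thumb; acceptable latency varies by workload): the disk latency counters distinguish saturated spindles from the rest of the path reasonably well. Sustained values of Avg. Disk sec/Read or sec/Write above roughly 0.020-0.050 on the SQL volume point at the drives:

        rem sample disk latency from the command line every 5 seconds
        typeperf "\LogicalDisk(*)\Avg. Disk sec/Read" "\LogicalDisk(*)\Avg. Disk sec/Write" -si 5

    If latency stays low even during the 100% spikes, the bottleneck is more likely elsewhere in the stack.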

    Read the article

  • How to best tune my SAN/Initiators for best performance?

    - by Disco
    As the recent owner of a Dell PowerVault MD3600i, I'm experiencing some weird results. I have a dedicated 24x 10GbE switch (PowerConnect 8024), set up for 9K jumbo frames. The MD3600 has 2 RAID controllers, each with 2x 10GbE ethernet NICs. There's nothing else on the switch; one VLAN for SAN traffic.

    Here's my multipath.conf:

        defaults {
            udev_dir                /dev
            polling_interval        5
            selector                "round-robin 0"
            path_grouping_policy    multibus
            getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
            prio_callout            none
            path_checker            readsector0
            rr_min_io               100
            max_fds                 8192
            rr_weight               priorities
            failback                immediate
            no_path_retry           fail
            user_friendly_names     yes
            # prio                  rdac
        }

        blacklist {
            device {
                vendor  "*"
                product "Universal Xport"
            }
            # devnode "^sd[a-z]"
        }

        devices {
            device {
                vendor                  "DELL"
                product                 "MD36xxi"
                path_grouping_policy    group_by_prio
                prio                    rdac
                # polling_interval      5
                path_checker            rdac
                path_selector           "round-robin 0"
                hardware_handler        "1 rdac"
                failback                immediate
                features                "2 pg_init_retries 50"
                no_path_retry           30
                rr_min_io               100
                prio_callout            "/sbin/mpath_prio_rdac /dev/%n"
            }
        }

    And iscsid.conf:

        node.startup = automatic
        node.session.timeo.replacement_timeout = 15
        node.conn[0].timeo.login_timeout = 15
        node.conn[0].timeo.logout_timeout = 15
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 10
        node.session.iscsi.InitialR2T = No
        node.session.iscsi.ImmediateData = Yes
        node.session.iscsi.FirstBurstLength = 262144
        node.session.iscsi.MaxBurstLength = 16776192
        node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144

    After my tests, I can barely reach 200 Mb/s read/write. Should I expect more than that? Given that the array has dual 10 GbE controllers, my thinking was that I'd get somewhere around 400 Mb/s. Any ideas? Guidelines? Troubleshooting tips?
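    Before tuning multipath any further, it may be worth proving the jumbo-frame path end to end; a single mismatched MTU silently caps throughput (hedged commands, with placeholder names):

        # a 9000-byte MTU leaves 8972 bytes of ICMP payload after headers;
        # -M do forbids fragmentation, so a failure means some hop lacks jumbo frames
        ping -M do -s 8972 192.168.130.101    # hypothetical SAN portal IP

        # confirm each initiator NIC really is at MTU 9000
        ip link show dev eth2 | grep mtu      # eth2 is a placeholder

    It may also be worth benchmarking with several sessions in parallel: with round-robin multipath, a single iSCSI session tends to ride one path and one controller at a time, which by itself could look like the numbers reported.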

    Read the article

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having some problems debugging specific project types, related to the fact that the projects are accessed via a network share:

    - test projects don't run, because the test runner can't load the tests' DLL
    - web projects fail to run in the Visual Studio mini web server, throwing the following exception: 'An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config'

    I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found this post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010.

    The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" because I can't view the Security properties of the appropriate folder via Explorer: the Security tab just isn't present. Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?
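    One .NET 4-era switch that replaced the CasPol dance for exactly this scenario (hedged: I can't confirm it covers the mini web server case as well): loadFromRemoteSources, which grants assemblies loaded from network locations full trust again. The usual suggestion is to add it to devenv.exe.config, and to the .config of any test runner involved:

        <!-- inside the <configuration> element of devenv.exe.config -->
        <runtime>
          <loadFromRemoteSources enabled="true"/>
        </runtime>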

    Read the article

  • Linux apache developing configuration

    - by Jeffrey Vandenborne
    I recently reinstalled my system and came to the point where I need Apache and PHP. I've been searching for a long time, but I can't figure out how to configure Apache the best way for a developer machine. The plan is simple: install Apache 2 and a MySQL server so I can develop PHP websites. I don't want to install the LAMP bundle, though, just apache2, php5 and mysql-server.

    The problem I've been looking for an answer to is the permissions on the /var/www/ folder. I've tried making it my folder using the chown command, followed by chmod -R 755 /var/www. Most things work then, but fwrite, for example, won't, because that would require giving write permissions to everyone; short of changing my global umask to 000, I'm not sure what I can do.

    In short: I want to install apache2, php5 and mysql-server without using the LAMP bundle, configured in such a way that I can open up NetBeans, start a project with its root in /var/www/, and run every single function without permission faults. Does anyone have experience with, or workarounds for, this?

    Extra: OS: Ubuntu 10.04, arch: x86_64
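    One common arrangement that avoids both the world-writable tree and the umask change (a sketch; www-data is Ubuntu's stock Apache user): put your account in Apache's group, hand the tree to that group, and set the setgid bit so new files inherit it. PHP's fwrite then only needs group write:

        sudo adduser "$USER" www-data           # log out and back in to pick this up
        sudo chown -R "$USER":www-data /var/www
        sudo chmod -R 2775 /var/www             # leading 2 = setgid on directories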

    Read the article

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem which I could use for storing lots of small files (usually <1MB). What I want to get is:

    - 2 servers which have the fs mounted themselves and mirror the data
    - locking support (among reachable nodes)
    - some kind of best-effort automatic resynchronisation after one node goes down and comes back again

    What I mean by the resync is that I'm OK with both servers doing read/write operations even if they split-brain. I'm also OK with a local process obtaining a lock when the other host is not reachable. From the resync I expect only a file-level consistent view after a while; that is, if file x is modified on both nodes during a split-brain, I don't really care which one is available after they rejoin, as long as it's the full file, not one block coming from node1 and another block from node2. Is there a solution like that out there? I see that Gluster has some problems with file locks (even in 3.1). I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?

    Read the article

  • OS X: MySQL not dealing properly with data directory via soft link

    - by GJ
    I am trying to get a MacPorts-installed MySQL to use a data directory stored inside my FileVault-protected home dir. I used

        sudo cp -a /opt/local/var/db/mysql5 ~/db/

    (-a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link:

        sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5

    However, when I now try to start MySQL, it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log, which gets this appended to it:

        110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5
        110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead.
        110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead.
        110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive
        110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled.
        110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled.
        /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist
        110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332
        110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13)
        110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied
        110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended

    I can't see why MySQL can't create the pid file, since manually creating it as the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5). Any ideas how to resolve this?
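    A hedged hypothesis: the touch test succeeds because that shell is already inside the directory, but mysqld has to traverse ~ and ~/db by their full paths first, and a FileVault-protected home typically denies other users search (execute) permission at the top, so path lookups can fail with errno 13. Two sketches, one granting traversal, one moving the pid file out of the protected tree:

        # option 1: let _mysql descend through the path (OS X ACL syntax)
        sudo chmod +a "_mysql allow search" ~ ~/db

        # option 2: keep the data where it is but relocate the pid file,
        # e.g. in /opt/local/etc/mysql5/my.cnf:
        #   [mysqld]
        #   pid-file = /opt/local/var/run/mysql5/mysqld.pid

    Note also that a FileVault home is only mounted while you are logged in, so a mysqld that should start at boot cannot keep its data there in any case.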

    Read the article

  • .NET not processing an XML file in IIS

    - by Stuart McIntosh
    We have 2 servers, one already configured with .NET, which works fine, and a new one which appears to be configured the same; but when I open an XML page in Internet Explorer, it complains about the <% tag. We have IIS on Windows Server 2003 SP2, and the website is configured with .NET 1.1.4322. In ISAPI extensions I have set the .xml extension to use c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll. But the page:

        <property name="documentmaxage" value="0"/>
        <property name="documentmaxstale" value="0"/>
        <var name="m_Prompt_Path" />
        <form id="InitVoiceXmlDoc">
            <block>
                <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/>
            </block>
        </form>

    gives the error:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and
        then click the Refresh button, or try again later.
        The character '<' cannot be used in an attribute value. Error processing
        resource 'http://localhost:11119/fails.xml'. Lin... &quo...

    We have the same config on another server, which works fine. So are there other options, apart from the ISAPI extensions, that I need to look at? If I suffix the page .aspx, of course, it works fine.
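    One hedged thing to compare between the two servers: the ISAPI mapping only hands .xml requests to ASP.NET; the application still has to route *.xml to the page handler, or the runtime returns the raw file, <% and all, which is exactly what IE's XML parser then chokes on. On .NET 1.1 that routing lives in the httpHandlers section:

        <!-- web.config: have .xml processed like .aspx so inline <% %> executes -->
        <system.web>
          <httpHandlers>
            <add verb="*" path="*.xml" type="System.Web.UI.PageHandlerFactory"/>
          </httpHandlers>
        </system.web>

    If the working server has such an entry (possibly in its machine.config) and the new one does not, that would explain the difference.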

    Read the article

  • Installing Joomla on Windows Server 2008 with IIS 7.0

    - by Greg Zwaagstra
    Hi, I have been spending a while trying to install Joomla on a server running Windows Server 2008. I have successfully installed PHP (using Microsoft's web tool for installing PHP with IIS) and MySQL, and am now trying to run the browser-based installation. Everything comes up green; I fill in the appropriate information regarding the site name, MySQL details, etc., and no errors are thrown. However, when I get to the step that asks me to remove the installation directory, I am unable to do so, as Windows states it is in use by another program (I cannot fathom how this is true). Also, no configuration.php file is created, so I have a feeling that even if I managed to delete this folder there would be problems.

    I was thinking there was some kind of permissions issue, and have set the permissions for IIS_IUSRS to read, write, and execute on the entire folder that Joomla resides in, but this has not helped. Any help in this matter is greatly appreciated. ;) Greg

    EDIT: I decided to try installing Joomla manually by editing the configuration.php file by hand. This has worked great, and now I am certain there is some kind of permissions issue going on, because I am able to do everything that involves the MySQL database (create an article, edit menu items, etc.), but anything that involves making changes to the Joomla installation's directory does not work (installing plugins, editing configuration settings using the Global Configuration menu within Joomla, etc.). I have granted IIS_IUSRS every permission except Full Control (reading on the Joomla! forums suggests this should be enough for everything to work). This is confusing, and I am quite stuck on this problem.

    EDIT 2: The bizarre thing is that in the System Info under Directory Permissions everything shows up as Writable, but whenever I try to actually use Joomla to, for example, edit the configuration.php file via the interface, it says it is unable to edit the file.
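    Two hedged diagnostics for this class of problem: re-apply the ACL explicitly with inheritance flags, since the Explorer checkboxes sometimes leave child objects untouched, and watch the actual failing operation with Process Monitor filtered to "ACCESS DENIED" results to see which account and file are really involved:

        rem modify rights, inherited by files (OI) and subfolders (CI); path assumed
        icacls "C:\inetpub\wwwroot\joomla" /grant "IIS_IUSRS:(OI)(CI)M"

    If Process Monitor shows a different identity than IIS_IUSRS being denied (the application pool identity, for instance), granting that account instead should line things up with what Joomla's writability check reports.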

    Read the article

  • Scroll wheel causes browsers + Windows explorer to go back

    - by KaptajnKold
    I use Windows 7 on a VirtualBox VM on a Mac. Lately, in any browser (IE9, Chrome or Firefox) and even in Windows Explorer, use of the scroll wheel followed by any movement of the mouse cursor causes the browser to go back. Very annoying. This happens when I use the scroll wheel on a USB-connected mouse (brand/model unknown, since I don't have it in front of me as I write this) and when I use two-finger scrolling on the trackpad with no mouse connected. When I connect from my VM to a remote Windows box (Windows Server 2008), I experience the same problem. I have tried rebooting the VM to no avail. I am not sure when the problem started exactly; it may or may not have been after I connected the USB mouse for the first time, but unplugging it and then rebooting didn't help.

    I have tried to google for a solution, but all I've found are people who accidentally pressed the shift key while scrolling, which makes the browser go backward or forward in the browser history. That is not the problem I'm having; to be clear, in my case the browser only goes back when I move the mouse after I've used the scroll wheel. I'm at my wits' end :(

    Read the article
