Search Results

Search found 20087 results on 804 pages for 'css3 multiple backgrounds'.

Page 123 of 804

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I've got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server that is the only server in its protection group. This file server is the repository for a document management application, which stores files according to data in a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore a file: the file name, the folder the file was stored in, and the exact date it was deleted. Restoring a file from the DPM console is easy, since we have a recovery point created every day: I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that the cumbersome DPM console wizard requires about 20 mouse clicks to restore a single file and takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs hundreds of files restored... it takes all day of redundant mouse clicks. Therefore, I want to use a PowerShell script (and I'm a novice at PowerShell) to automate this process. I want a script that I can pass a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored to its original location, with some sort of success/failure notification. I thought this was a basic task for a backup solution, but I am having a heck of a time finding the right code. I have tried to follow the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx, but it doesn't accomplish what I really want to do (it's too simplistic) and there are errors in the sample code. Therefore, I would like some help writing a script to restore these files. An example of the known values to restore the data:

        DPM Server:             BACKUP01
        Protection Group:       Document Repository
        Data Protected Server:  FILER01
        File Path:              R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf
        Date Deleted:           8/2/2010 (last recovery point = 8/1/2010)

    Bonus points: if you can help me not only create this script, but also show me how to automate it by providing a text file with the above information that the PowerShell script loops through (or, even better, have it query our SQL server for the needed data), then I would be more than willing to pay for this development.
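    A rough sketch of the kind of CSV-driven loop being asked for, built around the DPM 2010 Management Shell cmdlets named in the linked TechNet article (Connect-DPMServer, Get-ProtectionGroup, Get-Datasource, Get-RecoveryPoint, Get-RecoverableItem, New-RecoveryOption, Recover-RecoverableItem). The restores.csv file, its column names, the object property names and the exact cmdlet parameters are all assumptions and would need to be checked against a real DPM 2010 installation before use:

        # Sketch only: cmdlet parameters and property names below are assumptions,
        # verify against the DPM 2010 Management Shell help before running anything.
        Connect-DPMServer -DPMServerName "BACKUP01" | Out-Null

        $pg = Get-ProtectionGroup -DPMServerName "BACKUP01" |
              Where-Object { $_.FriendlyName -match "Document Repository" }   # group name from the question
        $ds = Get-Datasource -ProtectionGroup $pg | Select-Object -First 1     # only one protected server in this group

        # hypothetical restores.csv with columns FilePath,DateDeleted
        Import-Csv "C:\restores.csv" | ForEach-Object {
            $row = $_

            # newest recovery point taken before the deletion date
            $rp = Get-RecoveryPoint -Datasource $ds |
                  Where-Object { $_.RepresentedPointInTime -lt ([datetime]$row.DateDeleted) } |
                  Sort-Object RepresentedPointInTime | Select-Object -Last 1

            # find the file inside the recovery point (a deep path may require walking
            # the folder tree with repeated -BrowseType Child calls)
            $item = Get-RecoverableItem -RecoverableItem $rp -BrowseType Child |
                    Where-Object { $_.Path -like ("*" + $row.FilePath) }

            # restore in place, overwriting the existing copy if one is present
            $opt = New-RecoveryOption -TargetServer "FILER01" -RecoveryLocation OriginalServer `
                                      -FileSystem -OverwriteType Overwrite
            Recover-RecoverableItem -RecoverableItem $item -RecoveryOption $opt
            Write-Host ("Requested restore of {0}" -f $row.FilePath)
        }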

    Read the article

  • best way to quickly share multiple photos without permanently hosting them

    - by dsollen
    I find that I'm often asked to share lots of photos with someone, enough that uploading each one individually to them gets tedious when I would like to drag and drop the whole bunch. I could put them on photobucket, but some of them are semi-private; private enough that I don't want them to be easily found on image hosting sites. Are there any convenient ways of sharing these photos quickly but still being able to remove them from the inter-webs afterwards (without too much hassle)? I have found Yahoo Messenger complete version has great photo sharing options; but not everyone has it and I can't expect people to download it just to see some photos.

    Read the article

  • Multiple rack apps on nginx + passenger, one as root, the other not...config help

    - by cannikin
    So I've got two apps I want to run on a server. One app I would like to be the "default" app, that is, all URLs should be sent to this app by default, except for a certain path, let's call it /foo:

        http://mydomain.com/        -> app1
        http://mydomain.com/apples  -> app1
        http://mydomain.com/foo     -> app2

    My two rack apps are installed like so (app1 and app2 are symlinks to their respective apps' public directories):

        /var
          /www
            /apps
              /app1
                app.rb
                config.ru
                /public
              /app2
                app.rb
                config.ru
                /public
            app1 -> apps/app1/public
            app2 -> apps/app2/public

    This is the Passenger setup for sub-URIs described here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_rack_to_sub_uri

    With the following config I've got /foo going to app2:

        server {
            listen 80;
            server_name mydomain.com;
            root /var/www;
            passenger_enabled on;
            passenger_base_uri /app1;
            passenger_base_uri /app2;

            location /foo {
                rewrite ^.*$ /app2 last;
            }
        }

    Now, how do I get app1 to pick up everything else? I've tried the following (placed after the location /foo directive), but I get a 500 with an infinite internal redirect in error.log:

        location / {
            rewrite ^(.*)$ /app1$1 last;
        }

    I hoped that the last directive would prevent that infinite redirect, but I guess not. /foo gets the same error. Any ideas? Thanks!
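    A hedged sketch of the layout Passenger's own documentation uses for this situation: deploy the default app at the root of the virtual host (root points at app1's public directory) and mount the second app at /foo through a symlink plus passenger_base_uri, instead of rewriting. The symlink name /foo and the exact paths are assumptions:

        # symlink (assumption): /var/www/apps/app1/public/foo -> /var/www/apps/app2/public
        server {
            listen 80;
            server_name mydomain.com;
            root /var/www/apps/app1/public;   # app1 answers everything by default
            passenger_enabled on;
            passenger_base_uri /foo;          # app2 served as a sub-URI at /foo
        }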

    Read the article

  • Compare 2 sets of data in Excel and returning a value when multiple columns match

    - by Susan C
    I have a data set for employees that contains each employee's name and 3 attributes (job function, job grade and job location). I then have a data set for open positions that contains the requisition number and the same 3 attributes (job function, job grade and job location). For every employee, I would like the three attributes associated with them compared to the three attributes of the open positions, and have the corresponding requisition numbers displayed for each employee where there is a match.
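    A hedged sketch of one way to pull back the first matching requisition, assuming the open positions sit in a table named Openings with columns Req, Function, Grade and Location, and the employee table uses the same column names (all of these names are assumptions). In older Excel versions it must be entered as an array formula (Ctrl+Shift+Enter); it returns only the first match, so listing several matching requisitions per employee would need something like TEXTJOIN in Excel 2016 or later:

        =IFERROR(INDEX(Openings[Req],
            MATCH(1, (Openings[Function]=[@Function]) *
                     (Openings[Grade]=[@Grade]) *
                     (Openings[Location]=[@Location]), 0)), "no match")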

    Read the article

  • Amazon ELB and use of address / server names across multiple servers

    - by Stpn
    I am setting up Nginx servers behind an ELB. I have set up api.app.com to point at the ELB. I wonder which addresses I should use for remote connections, Nginx settings, etc.

    1) For example, in Nginx, should I do:

        server {
            listen 80;
            # What is the right line here:
            # server_name <WWW.NAME.COM> OR <ec2-.....compute-1.amazonaws.com> OR <MLB-....amazonaws.com>?;
            passenger_enabled on;
            .....
        }

    2) I connect the servers behind the ELB to a remote Postgres database. In the Postgres settings, should I open access to the ELB address (MLB-...amazonws.com) or to the individual EC2 IPs?
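    For what it is worth, a minimal sketch of the usual pattern, on the assumption that the ELB passes the client's original Host header through to the backends, so the backend server block matches on the public name rather than on any amazonaws.com hostname (verify against the actual ELB listener configuration):

        server {
            listen 80;
            server_name api.app.com;   # the public name clients (and the ELB) send in the Host header
            passenger_enabled on;
        }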

    Read the article

  • kvm-over-ip, multiple machines per cable run

    - by Sirex
    I'm looking at getting a KVM-over-IP setup for a server room. Typically these devices have 16 or so Cat5 leads coming out of them, plus a converter that turns each Cat5 run into a VGA and PS/2 pair. Can you run one cable from the unit into a switch, and then leads from the switch into each machine? I have several machines on the other side of the server room that I'd like to have available, but I don't want to run 16 cables to them. I'm thinking this should be possible, being IP layer and all, but as each device normally has its own cable out of the back of the KVM unit, I'm not certain. Perhaps the KVM's rear ports act essentially like a switch anyway, in which case it should work. Or perhaps I could run all 16 cables into a separate switch right next to it, aggregate the ports together, run one cable to a switch on the other side of the room with a similar number of ports aggregated together, and then use that switch to plug each machine into? I'm fairly sure this is possible, but I just want to check before I shell out the cash, as I've never tried it.

    Read the article

  • Enable multiple audio output on Windows 7

    - by patrick
    For Windows 7, 64-bit: I have a digital S/PDIF output to my stereo, which drives speakers in other rooms. I also have a set of speakers connected to the regular audio jack on the computer. This allows me to send music to the kitchen while my child plays games on the computer. Works great. Except when I'm playing games and still want to listen to music. ;-D I know I can manually switch WMP to play through the speakers instead of S/PDIF, but I was wondering if there's any way to enable simultaneous audio output in Windows 7? Virtual Audio Cable is a non-starter because I'm running 64 bits and the VAC driver isn't signed.

    Read the article

  • Multiple VLANs on a single subnet

    - by mstaessen
    I would like to establish the setup shown below (the image is taken from http://gcharriere.com/blog/?p=620, which explains how to set this up on a Brocade device), but I would like to use an Ubuntu server to do the routing. Right now the switch and the server/router are connected with a trunk, and the server uses the vlan package, the kernel module and (inner) subnets for routing. What I want is that:

      - no IP addresses get lost to subnetting (the outer subnet is a /26, the inner subnets are /28s);
      - I don't have to rigorously subdivide my outer subnet; I want to be able to assign a VLAN to any IP in the outer subnet.

    How do I need to configure my interfaces? What is the "Ubuntu" translation of "ip follow ve"? Thanks!
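    For reference, a minimal sketch of an 802.1Q subinterface in /etc/network/interfaces with the vlan package installed; the interface name, VLAN ID and addressing are placeholders, and on its own this only gives the classic one-subnet-per-VLAN routing. Emulating "ip follow ve" would presumably need proxy ARP and per-host routes on top of it, which is exactly the open part of the question:

        # VLAN 10 tagged on trunk port eth1 (names and addresses are placeholders)
        auto eth1.10
        iface eth1.10 inet static
            address 192.0.2.65
            netmask 255.255.255.240   # an inner /28, i.e. the subdivision the poster wants to avoid
            vlan-raw-device eth1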

    Read the article

  • Multiple OS on a single machine

    - by Veejay
    I have a new hard drive. I want to install XP, Vista and Windows 7 on it. Is it possible, and what should the order be? I have heard there is some MBR-rewriting issue. Any 'simple' articles/tutorials are welcome.

    Read the article

  • Multiple subdomains, SSL on only one using port 80

    - by Emil Flink
    I am running an Apache2 server with three subdomains defined in separate files in /etc/apache2/sites-available. I need ONE of those subdomains to be SSL-secured on port 80 for an application to work; port 80 is required due to circumstances out of my control. The other subdomains are also on port 80. Now, when all subdomains are enabled in Apache, the SSL subdomain is NOT running SSL. If I disable the other subdomains, SSL on the SSL subdomain WORKS. Is there a way to fix this?

    Read the article

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally across web and service nodes (at the moment they're all on one node) while interacting with the same set of databases and source files (both application code and custom assets). The database is no problem; it's handled already with replication in MongoDB. The servers are also configured identically (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts so far have been NFS and SAN: SAN is prohibitively expensive, and NFS shows some performance issues on the second node with regard to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds; the secondary is taking ~2.2 seconds. They're VMs inside a local virtual network in ESXi and the ping time between them is ~0.3 ms.
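    As one data point, a hedged sketch of the NFS client mount options that are commonly tried when PHP stat()/glob() traffic makes a shared code tree slow. The server name, export path and mount point are placeholders, and relaxing attribute caching trades freshness for speed, so the numbers need tuning for the workload:

        # /etc/fstab on the secondary node (placeholder names, sketch only)
        filer:/export/appcode  /var/www/app  nfs  ro,noatime,nfsvers=3,actimeo=60,nocto  0  0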

    Read the article

  • SFTP, ChrootDirectory and multiple users

    - by mdo
    I need a setup where I can put the contents of several user folders onto a DMZ server from which external clients can download them; protocol SFTP, Linux, OpenSSH. To ease administration we want to use one single user for the upload. What does work is to define ChrootDirectory /home/sftp/ in sshd_config, set the appropriate ownership and modes, and define a home directory in passwd so that the working directory of each user fits. This is my structure:

        /home/sftp/uploader/user1/file1.txt
        /home/sftp/uploader/user2/file2.txt

    The uploader user can write file1.txt and file2.txt to the corresponding folders, and by giving the user folders (user1, user2) the users' primary group plus setting SETGID on the folders, the users are even able to delete the files (which is necessary). Only problem: because /home/sftp/ is the chroot base directory, the users can change up a directory and see other users' folders, though they cannot change into them because of access rights. Requirement: we want to prevent users from changing to /home/sftp/uploader/ and seeing other users' folders. My requirements are to use SFTP, have one upload user, and every user must have write access to his home directory. Obviously it's not an option to use something like ChrootDirectory %h, because every path component of the chroot path needs to have limited access rights, so as far as I understand this does not work.
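    For reference, a minimal sshd_config sketch of the per-user chroot mechanism that is usually paired with internal-sftp; the group name sftponly is a made-up placeholder. The usual ownership rules still apply (every path component of the chroot must be root-owned and not group/other-writable), which is the very constraint the poster is running into, so this is a starting point rather than a drop-in answer:

        Subsystem sftp internal-sftp

        Match Group sftponly
            ChrootDirectory /home/sftp/uploader/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no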

    Read the article

  • icacls batch file multiple directories with wildcards help needed

    - by user153521
    I have written the following batch file, which does a great job combing through all folders beginning with the number 3 and applying folder permissions to any 2010 subfolder:

        for /D %%f in (D:\Data\3*) do icacls "%%f\2010" /inheritance:r /grant:r "Domain Admins":(OI)(CI)F

    Question: how can I improve this script so it applies the permissions to a specific folder below ANY folder within the folders beginning with 3? Here is an example of my failed attempt:

        for /D %%f in (D:\Data\3*) do icacls "%%f*\specificfolder" /inheritance:r /grant:r "Domain Admins":(OI)(CI)F
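    A hedged, untested sketch of one way to express "a specific folder one level below any folder inside each 3* folder": nest a second for /D over the subfolders and guard with if exist. The name specificfolder is the placeholder from the question:

        for /D %%f in (D:\Data\3*) do for /D %%g in ("%%f\*") do if exist "%%g\specificfolder" icacls "%%g\specificfolder" /inheritance:r /grant:r "Domain Admins":(OI)(CI)F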

    Read the article

  • Distribute outgoing connections among multiple IPs configured on the same NIC

    - by cedivad
    I have a NIC with 2 aliases on it, so the network interface has 3 IPs configured. Think about it like this: I can ping the same server by hitting .100, .101 and .102. I want the source address of outgoing connections to be distributed among these IPs, so if I have 3 open connections, one connection should go out with a source IP ending in .100 and the other two with .101 and .102. I'm using FreeBSD, but I think this question applies to Linux-like systems as well.
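    On FreeBSD, one hedged possibility is pf's NAT address pool with the round-robin option, which rotates the translation (source) address across the pool; the interface name and addresses below are placeholders and the sketch is untested:

        # /etc/pf.conf (placeholder interface and addresses)
        ext_if = "em0"
        nat on $ext_if from any to any -> { 192.0.2.100, 192.0.2.101, 192.0.2.102 } round-robin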

    Read the article

  • L2TP and multiple interfaces on the machine

    - by Alex
    We have set up IPsec and L2TP on Linux. One question that came up (due to firewall management policy) is whether it's possible to have one virtual interface instead of one per connected client. Now we have:

        ppp0  serverip  clientip1
        ppp1  serverip  clientip2

    We want to have:

        l2tp_tun  serverip  serverip

    like OpenVPN's tun interfaces, and then be able to push an IP address and route to each client.

    Read the article

  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure Apache to allow access to /home/src/web with a chcon command granting the httpd_sys_content_t type. But now I am trying to serve the rsyslogd.conf file from the same directory, and every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access. My question is: is it possible to grant two applications access to the same directory while still keeping SELinux enabled?

    Current perms on /home/src:

        drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src

    Audit log message:

        type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null)

    -- Edit --

    I came across this post, which is sort of what I am trying to accomplish. However, when I viewed the list of allowed sebool parameters, the only one relating to syslog was syslogd_disable_trans (SELinux Service Protection). It seems like I could keep the current SELinux 'type' on the /home/src/ directory but set the syslogd_disable_trans boolean to false. I wonder if there is a better approach?
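    One commonly suggested direction, offered here only as a hedged sketch: relabel the shared tree with a more generic readable type rather than httpd_sys_content_t, and make the labelling persistent with semanage instead of chcon. Whether syslogd_t is actually allowed to read public_content_t (and whether the home_root_t denial on /home itself also needs addressing) should be verified with sesearch before relying on this:

        # assumption: public_content_t is readable by both httpd_t and syslogd_t -- verify first, e.g.
        #   sesearch --allow -s syslogd_t -t public_content_t -c file
        semanage fcontext -a -t public_content_t "/home/src(/.*)?"
        restorecon -Rv /home/src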

    Read the article

  • How to synchronise address books across multiple PCs

    - by Rob
    I am using Thunderbird 15.0.1 on two different Windows computers (Vista and Win 7). I maintain the email list for a local club and often need to send out emails to all members from both my home and office PCs. I use a Thunderbird address book to maintain the list of club members and would like any changes made to that list to synchronise to both machines. Can anyone recommend a good way of doing this? I've found a couple of extensions that claim to do it, but both have reviews suggesting they no longer function correctly, since they are quite old extensions. Is there another tool that I would be better off using? Rob

    Read the article

  • VMware - Watching multiple virtual machine screens

    - by mr.b
    Hi, I don't even know if I've put the question right, but here's what I'd like to accomplish. I have avoided specifying an exact VMware virtualization product here because I'm not sure which one would be most suitable for the task at hand. I am developing an application that works on a local network. This application has to run on several computers at the same time, and it's important to me, as a developer and tester, to see (literally) how it behaves at all times on all computers. Is there any way to connect to the screens of virtual machines deployed on ESX, ESXi, Server 2.0, or some other product, so that I can see something like a grid of screens, say 4x4 or 6x4 or whatever number of scaled screens, at the same time? The ability to interact with the screens directly from the grid (by double-clicking a screen, for instance, and then getting a full-resolution view) would be greatly appreciated, of course. I hope that someone understands what I mean here. :)

    Read the article

  • dhclient append settings from multiple DHCP servers

    - by Brian
    I have a server with two interfaces connected to two separate networks, using DHCP for both. When dhclient writes /etc/resolv.conf, I would like it to append settings that aren't already there. For instance, if I receive from one DHCP server:

        nameserver 10.0.0.1
        search one.mydomain.com

    and from another:

        nameserver 10.1.1.254
        search two.mydomain.com

    then resolv.conf should look like this:

        search one.mydomain.com two.mydomain.com
        nameserver 10.0.0.1
        nameserver 10.1.1.254

    At the moment, it seems the last dhclient run overwrites whatever was there. I know I can preconfigure settings in dhclient.conf using supersede or append, but then I have to hard-code the values. I've scoured the man page for dhclient, but it seems like dhclient prefers to work alone (i.e. not in conjunction with any other dhclient instances)... or am I missing something?

    Read the article

  • How to host multiple FLEX applications in IIS7

    - by Devtron
    Hello, I manually deploy a FLEX application to my web server (IIS 7). There are two virtual directories: 1) Default and 2) myFlexApp1. myFlexApp1 is where my working FLEX application resides. I now need to deploy a different FLEX application (let's call it myFlexApp2) to the same web server. I set up a virtual directory for myFlexApp2 and it complains about the bindings using port 80, which is already used by myFlexApp1. I have tried to give them separate host names in their binding properties, for example myFlexApp1.mydomain.com and myFlexApp2.mydomain.com, but I can never get myFlexApp2 to show from an external browser. I was able to get one or the other to display, but never both. Here is what I need:

        myFlexApp1.mydomain.com  ->  myFlexApp1
        calendar.mydomain.com    ->  myFlexApp2
        test.mydomain.com        ->  myFlexApp1

    where test.mydomain.com is the default URL. Is this possible? What am I doing wrong? I even tried to edit the hosts file in C:\Windows\System32\drivers\etc, but that didn't work either. How can I serve up two FLEX applications on IIS 7? It shouldn't be this hard!
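    For what it is worth, a hedged sketch of host-header bindings for two separate IIS 7 sites sharing port 80 (site names taken from the question; appcmd lives in %windir%\system32\inetsrv, and the exact syntax should be double-checked). Note that bindings belong to sites, not virtual directories, so each app would live in its own site:

        appcmd set site "myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:test.mydomain.com']
        appcmd set site "myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:myflexapp1.mydomain.com']
        appcmd set site "myFlexApp2" /+bindings.[protocol='http',bindingInformation='*:80:calendar.mydomain.com']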

    Read the article

  • mount multiple folders with nfs4 on centos

    - by microchasm
    I'm trying to get NFSv4 working here. On Machine 1 (the server) I have a folder containing 2 other folders that I'm trying to share independently:

        /shared/folder1
        /shared/folder2

    Problem is, I can't seem to figure out how to mount the folders independently on the client.

    (Machine 1 - server) /etc/exports:

        /var/shared/folder1 192.168.200.101(rw,fsid=0,sync)
        /var/shared/folder2 192.168.200.101(rw,fsid=0,sync)
        ...

        exportfs -ra

    (Machine 2 - client) /etc/fstab:

        192.168.200.201:/folder1/ /home/nfsmnt/folder1 nfs4 rw 0 0
        ...

        mount /home/nfsmnt/folder1
        mount.nfs4: 192.168.200.201:/folder1/ failed, reason given by server: No such file or directory

    The folder is there, I'm positive. I think there is something simple I'm missing, but I'm totally missing it. It seems like there should be a way in fstab to tell NFS which folder on the server I want to mount, but I can only find references to what looks like a root mount point (e.g. 192.168.1.1:/), which I assume is handled by exports on the server. But even with the folders set up in exports, there doesn't seem to be an apparent way to pick and choose which gets mounted. Is it not possible to mount separate folders from the same server to different mount points on the client? Any help appreciated.
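    For comparison, a hedged sketch of the usual NFSv4 pseudo-root layout, where exactly one export carries fsid=0 and the client mounts paths relative to that root. The /export directory and the bind mounts are assumptions added for illustration; the addresses mirror the question:

        # /etc/exports on the server: a single fsid=0 root, with the folders exported beneath it
        /export           192.168.200.101(rw,sync,fsid=0,crossmnt,no_subtree_check)
        /export/folder1   192.168.200.101(rw,sync,no_subtree_check)
        /export/folder2   192.168.200.101(rw,sync,no_subtree_check)

        # /etc/fstab on the server: expose the real folders under the pseudo-root
        /var/shared/folder1  /export/folder1  none  bind  0  0
        /var/shared/folder2  /export/folder2  none  bind  0  0

        # /etc/fstab on the client: paths are relative to the fsid=0 root
        192.168.200.201:/folder1  /home/nfsmnt/folder1  nfs4  rw  0  0
        192.168.200.201:/folder2  /home/nfsmnt/folder2  nfs4  rw  0  0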

    Read the article

  • Active Directory - Using GPO To Update Multiple Versions Of .NET

    - by Joe Wilson
    OK, I have searched everywhere for this one. I have all the MSIs and packages I need to deploy .NET 3.5 SP1, plus 2.0 and 3.0 (which are prerequisites for 3.5). I can't figure out how to install all of them at once via GPO. Basically, the computers on the network do NOT have any version of .NET installed, and I need them to be at 3.5 SP1. I know I can deploy each version via GPO, force-reboot the clients, push the next one, force-reboot again, and so on. Is there a way to streamline this and install all three at once via GPO? Thanks
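    One hedged alternative that is often used instead of chaining three GPO software packages: a computer startup script that silently runs the full .NET 3.5 SP1 redistributable, which carries the 2.0 and 3.0 prerequisites inside it. The share path is a placeholder and the installer switches should be verified against the exact redistributable in use:

        REM startup-script sketch -- the \\server\netlogon path is a placeholder
        IF NOT EXIST "%windir%\Microsoft.NET\Framework\v3.5" (
            "\\server\netlogon\dotnetfx35.exe" /q /norestart
        )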

    Read the article

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple-domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites, some are dynamic, usually with WordPress installed. If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

        *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

        server {
            listen 80 default;
            server_name _;
            ...

            location / {
                root /var/www/$host;

                if (-f $request_filename) {
                    expires max;
                    break;
                }

                # problem: what if index.php does not exist?
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php/$1 last;
                }
            }
            ...
        }

    If an index.php does not exist, and the requested file also does not exist, I would like it to return a 404. Currently, nginx does not support multiple-condition ifs or nested ifs, so I need a workaround.
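    A hedged, untested sketch of one way to break the cycle without nested ifs: guard the rewrite so it only fires when a front controller actually exists and the URI has not already been rewritten. The usual "if is evil" caveats for nginx location blocks still apply:

        location / {
            root /var/www/$host;

            if (-f $request_filename) {
                expires max;
                break;
            }
            # 404 instead of looping when this vhost has no front controller
            if (!-f $document_root/index.php) {
                return 404;
            }
            # never rewrite a URI that already points at the front controller
            if ($uri !~ ^/index\.php) {
                rewrite ^/(.*)$ /index.php/$1 last;
            }
        }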

    Read the article

  • Cumulative average using data from multiple rows in an excel table

    - by Aaron E
    I am trying to add a cumulative-average column to a table I'm making in Excel. I use the totals row for the ending cumulative average, but I would also like a column that gives the cumulative average for each row up to that point. So if I have 3 rows, I want each row to have a column giving the average up to that row, and then the ending cumulative average in the totals row. Right now I can't figure this out, because the formula would have to reference rows above and below the current row, and I'm unsure how to go about that in a table rather than plain cells. If it were just cells I would know how to write the formula and copy it down each row, but since the formula needs to keep working when a new row is added to the table, I keep thinking it would be something like (Completion Rate row 1 / n) for row 1, where n is the number of rows up to that point, then ((Completion Rate row 1 + Completion Rate row 2) / n) for row 2 where n = 2, and so on for each new row. Please advise.
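    A hedged sketch of the usual expanding-range trick for tables, assuming the table is named Table1 and the column is named Completion Rate (both names are assumptions). INDEX anchors the first cell of the column, so the averaged range grows with the table as rows are added; placed in a calculated column it gives each row its running average:

        =AVERAGE(INDEX(Table1[Completion Rate],1):[@[Completion Rate]])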

    Read the article

  • Corrupted NTFS Drive showing multiple unallocated partitions

    - by volting
    My external HDD, with a single NTFS partition, was accidentally unplugged (kids!)... and is now corrupted. I've tried running ntfsfix, with no luck; the output is below. When I look at the disk under Disk Management in Windows 7 it shows up as having 5 partitions, 2 of which are unallocated. None have drive letters and it is not possible to assign any (that option, and most others, are greyed out), so I can't run chkdsk /f. I've tried MiniTool Partition Wizard, which was mentioned as a solution to another similar question here. It showed the whole drive as one partition, but as unallocated, and the option "Check File System" was greyed out. Is there anything else I could try?

    Output of fdisk -l:

        Disk /dev/sdb: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x69205244

        This doesn't look like a partition table
        Probably you selected the wrong device.

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   ?    218129509  1920119918   850995205   72  Unknown
        /dev/sdb2   ?    729050177  1273024900   271987362   74  Unknown
        /dev/sdb3   ?    168653938   168653938           0   65  Novell Netware 386
        /dev/sdb4       2692939776  2692991410       25817+   0  Empty

        Partition table entries are not in disk order

    Output of ntfsfix:

        me@vaio:/dev$ sudo ntfsfix /dev/sdb
        Mounting volume... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Attempting to correct errors... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Failed to startup volume: Input/output error
        Checking for self-located MFT segment... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        OK
        ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        Volume is corrupt. You should run chkdsk.

    Options available with MiniTool: (screenshot omitted)

    Related questions:

      - How to fix a damaged/corrupted NTFS filesystem/partition without losing the data on it?
      - Repair corrupted NTFS File System

    Read the article
